Documenting Problems That Were Difficult To Find The Answer To

Monthly Archives: May 2012

Getting Debian Or Ubuntu To Tell Me Upon Login If Any Packages Can Be Upgraded

It is not uncommon for modern installs of Debian and Ubuntu to tell you, upon logging in, if there are any packages that can be upgraded with the apt-get upgrade command.

However I have set this up to be done in a controlled manner so that, when I log into my Linux box, I will see something like this if there is a package to upgrade:

___ APT reports: ___________________________________________________
Inst linux-libc-dev [2.6.38-15.59] (2.6.38-15.60 Ubuntu:11.04/natty-updates [amd64])

If you want to do this yourself in a controlled manner you can do the following.

In /etc/crontab add the following two lines (the ping target – I use www.debian.org here as an example – can be any host that reliably answers pings):

37 5 * * * root sh -c 'ping -q -w 4 www.debian.org && /usr/bin/apt-get update' 2>/dev/null >/dev/null
49 5 * * * root sh -c 'ping -q -w 4 www.debian.org && /usr/bin/apt-get -s upgrade' 2>/dev/null |/bin/grep -E "^Inst" >/tmp/apt-get-status.txt; chmod 644 /tmp/apt-get-status.txt

What do these lines do? The first pings to see if basic network connectivity is up (in case you’re running this on a server at home and your ADSL connection died during the night) – then it performs a standard apt-get update command to update your local APT database.

The next line also pings for basic connectivity – then it performs a “test upgrade” with the -s option to apt-get – and greps for any “Install” instructions – putting those lines into a temporary file (/tmp/apt-get-status.txt).

Now, every day, the /tmp/apt-get-status.txt file contains a list of all packages ready to be upgraded. However – you don’t want to just print this file on login – because what if you do the recommended upgrade? You don’t want to be reminded you need to upgrade if you’ve upgraded! To protect against this it’s just a matter of comparing the datestamps on the temporary file against the APT history file – if the APT history file is newer then you probably don’t want to be reminded about upgrades for the rest of the day.

In your .bashrc add the following lines (maybe near the top):


APT_STATUS=/tmp/apt-get-status.txt
# check if the temporary file is non-zero bytes in size
if [ -s ${APT_STATUS} ]; then
    # check if the temporary file is newer than the APT history log
    if [ ${APT_STATUS} -nt /var/log/apt/history.log ]; then
        echo "___ APT reports: ___________________________________________________"
        cat ${APT_STATUS}
        echo "--------------------------------------------------------------------"
    fi
fi

This code checks to see that the file /tmp/apt-get-status.txt exists and is non-zero in size (if there are no packages to be upgraded then the grep command during the practice upgrade will output zero bytes into the file). If so it then checks that the temporary file is newer than the APT history log. Only then does it output the list of packages to be upgraded.

Perl Script to Calculate IP Blocks Given a Range of IP Addresses

Finding IP Blocks in a Range of IP Addresses

This is not an uncommon problem. You look through your web logs and find hack attempts from a range of IP addresses in the same general area. To protect your machine you want to set up a series of iptables (the firewall for Linux) rules for that address range. You look up the address range using the whois tool – and sometimes the range of addresses allocated to an AS (autonomous system) is not a simple /24 or /16. So how do you quickly and easily work out the netblocks involved to feed to your firewall?

I’ve written a simple Perl script below to calculate the IP blocks that fit within a given IP range (from the lower IP address to the upper IP address, inclusive).

If you save the below code as a script and call it with the lower and upper IP addresses as its two arguments, it will print the matching netblocks, one per line.
#!/usr/bin/perl -w

use strict;

# convert a dotted-quad IP address string into a 32 bit integer
sub ipstr_to_int {
    my ( $s ) = @_;
    my @z = split( /\./, $s );
    return( ($z[0] << 24) + ($z[1] << 16) + ($z[2] << 8) + $z[3] );
}

# convert a 32 bit integer back into a dotted-quad IP address string
sub int_to_ipstr {
    my ( $i ) = @_;
    my $s = sprintf(
        "%d.%d.%d.%d", ($i>>24)&0xFF, ($i>>16)&0xFF, ($i>>8)&0xFF, $i & 0xFF
    );
    return( $s );
}

# hard-coded binary netmasks, indexed by prefix length (/0 to /32)
my @ip_masks = (
    0x00000000, 0x80000000, 0xC0000000, 0xE0000000, 0xF0000000,
    0xF8000000, 0xFC000000, 0xFE000000, 0xFF000000, 0xFF800000,
    0xFFC00000, 0xFFE00000, 0xFFF00000, 0xFFF80000, 0xFFFC0000,
    0xFFFE0000, 0xFFFF0000, 0xFFFF8000, 0xFFFFC000, 0xFFFFE000,
    0xFFFFF000, 0xFFFFF800, 0xFFFFFC00, 0xFFFFFE00, 0xFFFFFF00,
    0xFFFFFF80, 0xFFFFFFC0, 0xFFFFFFE0, 0xFFFFFFF0, 0xFFFFFFF8,
    0xFFFFFFFC, 0xFFFFFFFE, 0xFFFFFFFF,
);

# hard-coded block sizes, indexed by prefix length (/0 to /32)
my @ip_ranges = (
    4294967296, 2147483648, 1073741824, 536870912, 268435456,
    134217728, 67108864, 33554432, 16777216, 8388608,
    4194304, 2097152, 1048576, 524288, 262144,
    131072, 65536, 32768, 16384, 8192,
    4096, 2048, 1024, 512, 256,
    128, 64, 32, 16, 8,
    4, 2, 1,
);

sub calc_ranges {
    my ( $ip_from, $ip_to ) = @_;

    # convert IP strings into integers
    my ( $int_ip_from ) = ipstr_to_int( $ip_from );
    my ( $int_ip_to ) = ipstr_to_int( $ip_to );
    my @results = (); # holding variable for results
    # iterate until we've found blocks for all parts of the range
    my $int_ip_next = $int_ip_from;
    while ( $int_ip_next <= $int_ip_to ) {
        # find widest netblock/mask that commences from this IP
        my $netmask_bits = 31; # we could start at 32 but pointless
        while ( $netmask_bits > 0 ) {
            # check - does this block start below our minimum address?
            my $mask = $ip_masks[$netmask_bits];
            last if ( $int_ip_next != ( $int_ip_next & $mask ) );
            # check - does this block exceed the range we want?
            my $this_block_ip_to = $int_ip_next + $ip_ranges[$netmask_bits] - 1;
            last if ( $this_block_ip_to > $int_ip_to );
            # widen netmask/range for this block and try again
            $netmask_bits--;
        }
        # last block tried was 1 bit too wide
        $netmask_bits++;
        push( @results, int_to_ipstr( $int_ip_next ) . "/$netmask_bits" );
        $int_ip_next += $ip_ranges[$netmask_bits];
    }
    return( @results );
}

# demonstration routine
# you can call from command line like:
#   perl -w <script-name> <from-ip> <to-ip>
my $from = shift;
my $to = shift;
my @results = calc_ranges( $from, $to );
print( "  $_\n" ) foreach ( @results );

In the code I’ve used hard-coded arrays for the binary netmasks and block sizes. Subroutines could be used to compute these instead but, of course, they would be slower – especially given that for every block up to 32 netmasks can be “tried” for fit.

This Perl code should be easily/trivially translated to other languages if you have the need for such a tool yourself.
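As an illustration, here is a rough Python equivalent of calc_ranges (my own translation, not from the original post – it computes each mask on the fly rather than using the hard-coded arrays):

```python
def ip_to_int(s):
    """Convert a dotted-quad IP string to a 32 bit integer."""
    a, b, c, d = (int(x) for x in s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d


def int_to_ip(i):
    """Convert a 32 bit integer back to a dotted-quad IP string."""
    return "%d.%d.%d.%d" % (
        (i >> 24) & 0xFF, (i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF
    )


def calc_ranges(ip_from, ip_to):
    """Return the CIDR blocks covering ip_from..ip_to inclusive."""
    start, end = ip_to_int(ip_from), ip_to_int(ip_to)
    results = []
    while start <= end:
        bits = 32  # a /32 (single address) always fits
        while bits > 0:
            wider = bits - 1
            mask = (0xFFFFFFFF << (32 - wider)) & 0xFFFFFFFF
            size = 1 << (32 - wider)
            # stop widening if the block would be misaligned or overrun the range
            if start != (start & mask) or start + size - 1 > end:
                break
            bits = wider
        results.append("%s/%d" % (int_to_ip(start), bits))
        start += 1 << (32 - bits)
    return results
```

For example, calc_ranges("10.0.0.1", "10.0.0.12") yields 10.0.0.1/32, 10.0.0.2/31, 10.0.0.4/30, 10.0.0.8/30 and 10.0.0.12/32 – the same blocks the Perl version should print for that range.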

Software vs Hardware as a Career

It is becoming more and more apparent to me that hardware – computers, mobile phones, home servers, embedded machines, and cloud computing – is little more than a commodity these days. Most people have smart phones now – and storage is cheap and plentiful.

We are up to our eyeballs in cheap CPUs. ARM processors, Intel chips – 32 bit processors are everywhere. A CPU isn’t designed for a single purpose – it is designed to be as generic as possible to accomplish as many varied tasks as possible.

One CPU can do many, many different things. Thus for one CPU there are an almost infinite number of software packages that can be written to utilise that CPU.

Whilst it is true that one can make a processor that specialises in a particular task (graphics, signal processing, etc) we have cheap plentiful access to very generic and capable processors in commodity quantities and prices.

Software Is Not A Commodity

Software is not, yet, a commodity. With such flexible hardware out there – one chip does many things – we still do not have such broad flexibility in software (one program doing many things). About the closest things we have to that are spreadsheets and web browsers. And yet even spreadsheets must be programmed, and web browsers require websites to be written for them.

There will only be an ever-increasing demand for software engineers/programmers in the years to come. As our world becomes flooded with generic all-purpose CPUs everywhere (they will only become more prolific in the household) there will be an ever-increasing demand for those CPUs to be utilised.

Visualising Growth

The gap between software possibility, actual implementation, and hardware capability


In my personal view of the world it has appeared that hardware capability has followed, more or less, Moore’s Law – a steady doubling of transistor counts (and, roughly, of CPU performance) every couple of years. And I believe that the potential utilisation of a CPU in software grows exponentially in comparison to that increase in capability in hardware.

When you consider what a Commodore 64 was able to do – with 64KB (kilobytes) of memory – it really is mind-blowing. Of course back then games were often written in assembler – manually optimised with every instruction carefully considered. Yet games had sound, responsiveness, and playability. If you extrapolate that to multi-core processors with gigabytes of memory – and solid state disk drives – the mind boggles at what we could be capable of doing if we optimised our applications as much as we could.

Of course commercial realities prevent software from being all it could possibly be. Alongside the incredible growth in hardware capabilities has come the realisation that we can afford to be lazier in software development. With strongly-typed languages, garbage collection, and numerous other protections, our less efficient software gives us what we need – albeit somewhat slower than the potential. Hence we have an actual software utilisation rate that is more closely matched to the hardware than to the theoretical potential.

What this leaves is a vacuum of potential in the software world. And given where we are today in terms of hardware capability we may never fill this gap, in spite of all the effort in the world. Hardware manufacturers will still try to grow their capabilities – but you only need a small number of hardware engineers to do that. Meanwhile those few hardware engineers design a chip that gets pumped out of factories in the multi-millions, and the software industry has the job of taking that quantity of chips and processing power and putting it to use – and that’s not a small job!

Bad Memory!

As part of an upgrade to my PC with an Ivy Bridge i5-3570K processor I bought a 4 DIMM pack of Corsair 16GB (4x4GB) DDR3 1600MHz Vengeance “Low Profile” Memory Kit CL9 1.5V.

A couple of times in the evening after upgrading my PC I got Blue Screens of Death. Such as the following:

Blue Screen of Death - MEMORY_MANAGEMENT


This said:

A problem has been detected and Windows has been shut down to prevent damage
to your computer.


If this is the first time you've seen this Stop error screen,
restart your computer. If this screen appears again, follow
these steps:

Check to make sure any new hardware or software is properly installed.
If this is a new installation, ask your hardware or software manufacturer
for any Windows updates you might need.

If problems continue, disable or remove any newly installed hardware
or software. Disable BIOS memory options such as caching or shadowing.
If you need to use Safe Mode to remove or disable components, restart
your computer, press F8 to select Advanced Startup Options, and then
select Safe Mode.

Technical information:

*** STOP: 0x0000001A (...)

Collecting data for crash dump ...
Initializing disk for crash dump ...
Beginning dump of physical memory.
Dumping physical memory to disk: 80

A prior Blue Screen of Death (BSOD) mentioned something about a system exception.

After searching the Internet I eventually followed instructions and downloaded the latest memtest86+ program onto a USB flash drive (the Windows USB installer worked for me).

So I booted off the flash drive, which runs the memory test – and it immediately started finding errors. So I used the old “binary search” algorithm: divide the problem in half over and over until you isolate the fault.

I ripped out 2 of the 4 memory DIMMs – and still encountered errors. Then I ripped out one more DIMM. No more errors. Now I knew which DIMM was faulty. I installed it as the sole memory DIMM and re-ran the memory test – errors! This is what it looked like:

Memtest86+ and a DIMM with errors


Note that the memory test constantly identifies two upper bytes as faulty – the remaining six are fine – which could indicate faulty chips on the DIMM.

Then I pulled out the faulty DIMM and put the 3 okay DIMMs back in. I re-ran the memory test. No problems.

I recommend this approach to solving memory problems. At first I thought it might be a timing issue. Fortunately it was as simple as a faulty DIMM.
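The halving strategy is just a binary search over the set of suspect parts. As a hypothetical sketch (the function and the test callback are mine, not a real tool – in real life has_errors is a memtest pass with only those DIMMs installed):

```python
def find_faulty(units, has_errors):
    """Isolate a single faulty unit by repeatedly testing half of the set.

    units: list of part identifiers (e.g. DIMM labels).
    has_errors: a test over a subset of the parts, such as running
    memtest86+ with only those DIMMs fitted. Assumes exactly one
    unit is faulty.
    """
    while len(units) > 1:
        half = units[: len(units) // 2]
        if has_errors(half):
            units = half  # the fault is in the first half
        else:
            units = units[len(units) // 2:]  # it must be in the other half
    return units[0]
```

With 4 DIMMs this isolates the bad one in two test passes instead of testing each DIMM individually.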

2012-05-23 Update

I had, delivered on 2012-05-21, another 16GB of RAM as an 8GB set of Corsair Vengeance 8GB (2x4GB) DDR3 1600MHz CL9 1.5V Non-ECC (black) and an 8GB set of Corsair Vengeance 8GB (2x4GB) DDR3 1600MHz Memory Kit CL9 1.5V Non-ECC (red) – both 9-9-9-24.

Of course this time the first thing I did was run a full memtest86+ test set on the RAM. All clear! No blue screens of death in the last 2 days including leaving the system running overnight and various suspends/wakes.

Here is a picture of the red and black Corsair full height memory DIMMs in the running machine (the fan appears stopped because of the flash but was actually spinning):

Corsair Memory In Ivy Bridge Motherboard


Getting NVidia 9600 GT Working With Ivy Bridge i5

I have been running a Windows 7 64-bit system on an Intel(R) Core(TM)2 Duo CPU E7400 for some years. I decided it was time to upgrade.

I bought an Intel Core i5 3570K (Ivy Bridge) with an ASUSTeK P8H77-V LE motherboard and 16GB (4x4GB DIMMS) Corsair 1600 DDR3 RAM.

It took about an hour to rip out the old motherboard, put the new one in, and re-assemble everything. It powered on successfully.

But I had some problems. Mainly I couldn’t get the NVidia 9600 GT graphics card (which was slotted into the PCI-E slot nearest the CPU on the motherboard) recognised by Microsoft Windows.

First I installed the ASUSTeK motherboard drivers off the supplied DVD.

Then I installed the VGA display driver.

But then the keyboard wouldn’t work after boot. Windows would just sit there at the log-in screen permitting mouse movement but no keyboard entry. Rebooting into Windows Safe Mode (press F8 during boot) made the keyboard work again – and I uninstalled the HD Graphics 4000 (integrated i5 chip graphics) driver.

So when I went into Device Manager (start -> right click on Computer -> left click on Manage -> select Device Manager) I would see the following:

Device Manager Before NVidia Installation


Now – I was scratching my head – why couldn’t Windows 7 see my NVidia card? I started up a tool called “CPU-Z” and it detected the presence of 2 video cards – one of which was an unidentified NVidia card. I had hope.

I went onto the GeForce website and clicked on “auto-detect your GPU”:

GeForce Website Auto Detect Page


The website asked me if I wanted to run some software – I accepted – and got the following webpage:

GeForce Website Card Detected Page


My card was successfully detected in my system! I went ahead and downloaded the suggested driver and ran it.

After rebooting my second display (I had it in dual display mode prior to the motherboard upgrade) came alive – with everything the same (including wide wallpaper) as prior to the upgrade.

Here is what my Device Manager window looks like now:

Device Manager After NVidia Driver Installation


As you can see the NVIDIA GeForce 9600 GT card is present and installed.