Apr 16, 2015

I’ve been setting up SNMP traps on Zabbix 2.4 to replace our existing monitoring solution.
One of the hurdles I’ve come across is getting all the traps set up.

An easy way of doing this is to take the MIB files for the traps you’re receiving and convert them into configuration files for SNMPTT to use when parsing the traps.
The snmpttconvertmib command takes a MIB file as input and spits out a configuration file suitable for SNMPTT.
Using an Oracle MIB file as an example –

snmpttconvertmib --in=ORACLE-ENTERPRISE-MANAGER-4-MIB.mib --out=/etc/snmp/snmptt.conf.ora-em4

This will produce a file for SNMPTT, but Zabbix will not parse the traps yet, as the FORMAT lines aren’t quite what we need.
Next, we’ll use sed to do a global search and replace so that the FORMAT lines conform to the format that Zabbix requires.

sed -i 's/FORMAT/FORMAT ZBXTRAP $aA/g' /etc/snmp/snmptt.conf.ora-em4
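To see what the substitution actually does, here’s a minimal sketch using a made-up FORMAT line – the EVENT names and varbind text in a real converted file will differ:

```shell
# A hypothetical FORMAT line like those produced by snmpttconvertmib
printf 'FORMAT Alert: $1\n' > /tmp/sample.conf

# Prefix it with the ZBXTRAP token that Zabbix's SNMP trapper looks for
sed -i 's/FORMAT/FORMAT ZBXTRAP $aA/g' /tmp/sample.conf

cat /tmp/sample.conf
# FORMAT ZBXTRAP $aA Alert: $1
```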

The configuration file then needs to be added to the list of files that SNMPTT uses to parse the traps.
Open the /etc/snmp/snmptt.ini file – assuming it’s in the default location – and scroll right down to the bottom of the file.
You will see the following lines –

snmptt_conf_files = <<END
/etc/snmp/snmptt.conf
END

Add the file you've just created to the list like so -

snmptt_conf_files = <<END
/etc/snmp/snmptt.conf
/etc/snmp/snmptt.conf.ora-em4
END

After restarting SNMPTT, you should start seeing SNMP traps appear in Zabbix - assuming you've already set up the item.
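An item along these lines should match the traps – this is a sketch rather than an exact screenshot, so verify the key syntax against your own 2.4 frontend, and note the regexp in the key is just an example (snmptrap.fallback can be used to catch anything the regexp keys miss):

```
Type:                SNMP trap
Key:                 snmptrap["ORACLE-ENTERPRISE-MANAGER"]
Type of information: Log
```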

Dec 22, 2014

I recently had some issues with my single pfSense VM crashing, bringing down the entire network with it.

I thought the problem was flaky hardware, so I set up a second pfSense VM…and that crashed too.
So I decided to set up pfSense in high availability mode with CARP. The only problem there is that I’m on ADSL, with a single modem to share between the 2 pfSense servers.

After I followed the CARP Guide from pfSense, I’d end up with 2 PPPoE sessions open. One from each pfSense server.

The solution was to change the WAN interface configuration on the backup CARP node to dial on demand, and to disable apinger by turning off Gateway Monitoring. With this configuration, since no traffic is directed at the backup node, its WAN link stays down until the primary CARP node goes down. At that point, the backup node establishes a PPPoE session to the internet.
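For reference, the settings involved look roughly like this – a sketch from memory rather than an exact copy of the pfSense screens, so the labels may vary between versions:

```
Interfaces → WAN (PPPoE)
    Dial on demand:              checked

System → Routing → Gateways → WAN gateway
    Disable Gateway Monitoring:  checked   (stops apinger probing via the backup WAN)
```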

Dec 11, 2014

I’ve had to calculate working time between 2 dates as part of a dashboarding project.
As a result, I’ve built a function to do this for me.

First, though, I needed the public holidays to make sure they weren’t counted.
As an example, I’ve only used a couple of days, formatted like ‘24-04’.
I’ve stored these in an external PHP file called hols.php, however they can also be contained within the script.

$hols = array();

$hols[] = date("d-m", strtotime("last friday", easter_date())); // Good Friday
$hols[] = date("d-m", strtotime("next monday", easter_date())); // Easter Monday

Then for the main function. This takes 2 DateTime objects to compare. If the start is after the end, it will return 0.

function work_minutes($dtStart, $dtEnd) {
    global $hols; // the public holiday list from hols.php

    if ($dtStart > $dtEnd) {
        return 0;
    }

    $dtStart = clone $dtStart; // work on a copy so the caller's object isn't modified

    $di1Day = new DateInterval('P1D');
    $workStartHour = 7;  // When the work day starts
    $workEndHour = 17;   // When the work day ends
    $workMinutes = 0;    // Initialise the running counter that keeps track of the minutes of working time

    // Get the number of days between the 2 timestamps. The loop below runs for
    // an extra day to ensure that all the days are checked.
    $diffDays = $dtStart->diff($dtEnd)->format("%a");

    for ($x = 0; $x <= $diffDays + 1; $x++) {
        if ($dtStart->format('N') < 6) { // checks that it's on a Monday-Friday
            if (!in_array($dtStart->format('d-m'), $hols)) { // checks that it's not a public holiday

                // Create a couple of new DateTime objects to define the start and end of
                // the working day. These will be used to compare against when looping
                // through each day to calculate the working minutes.
                $dtStartOfDay = new DateTime($dtStart->format('Y-m-d').' '.$workStartHour.':00:00');
                $dtEndOfDay   = new DateTime($dtStart->format('Y-m-d').' '.$workEndHour.':00:00');
                $signStart = '';
                $signEnd = '';

                // If the starting DateTime object is before the start of the working day,
                // calculate the working time from the start of the working day instead,
                // as any time before it is irrelevant. The end is compared in a similar
                // way: if the end DateTime object is before the end of the working day,
                // it is used, otherwise the end of the working day is used.
                if ($dtStartOfDay >= $dtStart) {
                    if ($dtEndOfDay <= $dtEnd) {
                        $arrDiff = explode(' ', $dtStartOfDay->diff($dtEndOfDay)->format('%H %i'));
                        $signStart = $dtStartOfDay->diff($dtEndOfDay)->format('%R');
                    } else {
                        $arrDiff = explode(' ', $dtStartOfDay->diff($dtEnd)->format('%H %i'));
                        $signStart = $dtStartOfDay->diff($dtEnd)->format('%R');
                    }
                } else {
                    // The starting DateTime object provided is after the start of the
                    // working day, so it is used to calculate the working minutes.
                    if ($dtEndOfDay <= $dtEnd) {
                        $arrDiff = explode(' ', $dtStart->diff($dtEndOfDay)->format('%H %i'));
                        $signEnd = $dtStart->diff($dtEndOfDay)->format('%R');
                    } else {
                        $arrDiff = explode(' ', $dtStart->diff($dtEnd)->format('%H %i'));
                        $signEnd = $dtStart->diff($dtEnd)->format('%R');
                    }
                }

                // intDiff contains the number of minutes between the start DateTime/start
                // of the day and the end DateTime/end of the day.
                $intDiff = $arrDiff[0] * 60 + $arrDiff[1];

                // If there's a negative value, e.g. the starting DateTime was after the end
                // DateTime for this day, the value is ignored, otherwise it adds to the
                // running tally.
                if ($signStart != '-' && $signEnd != '-') {
                    $workMinutes += $intDiff;
                }
            }
        }
        // Add a day, and loop again.
        $dtStart->add($di1Day);
    }

    return $workMinutes;
}

Apr 30, 2014

I have Spectrum integrated with Service Now in order to raise Incidents automatically when certain critical alerts come through on CA Spectrum.

As a pre-requisite, Spectrum needs a user configured in Service Now with the ability to create records in the Incident table.

  1. Spectrum Configuration

    I have alarm notifiers set up to catch events that require an automated incident. The alarm notifier SetScript then calls a custom Perl script in order to send the web services request to Service Now.

    The SetScript has also been modified to parse out some extra parameters from the policy’s Notification Data. The notification data is accessed from the script through the variable “$NOTIFDATA”.
    I use a comma as a delimiter in the notification data to create fields, so the notification data within the policy looks like this

    Network Team,Network,Infrastructure

    The NOTIFDATA variable is expanded and then assigned to a bash array inside the SetScript with the following snippet –

    declare -a ENOTIFDATA
    IFS=','
    for x in $NOTIFDATA; do
        ENOTIFDATA=( "${ENOTIFDATA[@]}" "$x" )
    done
    unset IFS
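Run on its own with the example notification data from above, the comma splitting behaves like this – a standalone sketch, since in the real SetScript $NOTIFDATA is populated by Spectrum:

```shell
NOTIFDATA='Network Team,Network,Infrastructure'

declare -a ENOTIFDATA
IFS=','                       # split on commas rather than whitespace
for x in $NOTIFDATA; do
    ENOTIFDATA=( "${ENOTIFDATA[@]}" "$x" )
done
unset IFS

echo "${ENOTIFDATA[0]}"       # Network Team
echo "${ENOTIFDATA[2]}"       # Infrastructure
```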


    The SetScript then calls the Perl script with some arguments. The perl script will make the calls to Service Now.

    $SPECROOT/custom/scripts/SNow/RaiseInc.pl $AID "$AssignmentGroup" "Autogenerated - A $SEV alarm has occurred on $MNAME" "$EVENTMSG" "$Category" "$SubCat" "$SEV"

  2. Perl Script

    The Perl script is built from examples available from ServiceNow.

    The following example will need to be modified to suit your environment – specifically the parameters as different organisations will have different configurations for ServiceNow.


    #!/usr/bin/perl
    #use lib '/usr/lib/perl5/custom';

    # declare usage of SOAP::Lite
    use SOAP::Lite;
    use feature 'switch';

    # specifying this subroutine causes basic auth to use
    # its credentials when challenged
    sub SOAP::Transport::HTTP::Client::get_basic_credentials {
        # login as the itil user
        return 'spectrum_user' => 'spectrum_password';
    }

    # declare the SOAP endpoint here
    my $soap = SOAP::Lite
        -> proxy('https://instance.service-now.com/incident.do?SOAP');

    # calling the insert function
    my $method = SOAP::Data->name('insert')
        ->attr({xmlns => 'http://www.service-now.com/'});

    # create a new incident with the following parameters
    my @params = ( SOAP::Data->name(short_description => $ARGV[2]) );
    push(@params, SOAP::Data->name(u_requestor => 'Spectrum 9.3') );
    push(@params, SOAP::Data->name(contact_type => 'Auto Monitoring') );
    push(@params, SOAP::Data->name(description => $ARGV[3]) );
    push(@params, SOAP::Data->name(u_business_service => $ARGV[4]) );
    push(@params, SOAP::Data->name(assignment_group => $ARGV[1] ) );

    given ($ARGV[6]) {
        when ("MINOR")    { push(@params, SOAP::Data->name(urgency => '3') ); }
        when ("MAJOR")    { push(@params, SOAP::Data->name(urgency => '3') ); }
        when ("CRITICAL") { push(@params, SOAP::Data->name(urgency => '3') ); }
        default           { push(@params, SOAP::Data->name(urgency => '3') ); }
    }

    # invoke the SOAP call
    my $result = $soap->call($method => @params);

    # print any SOAP faults that get returned, otherwise print the response
    if ($result->fault) {
        print_fault($result);
        exec 'echo Incident Raising Error. Please check spectrum logs | mail -h smtp.dmz.localnet -s "Issue Raising Incident" admin@example.com';
    } else {
        print_result($result);
    }

    # convenient subroutine for printing all results
    sub print_result {
        my ($result) = @_;
        print $result->result, "\n" unless $result->fault;
    }

    # convenient subroutine for printing all SOAP faults
    sub print_fault {
        my ($result) = @_;

        if ($result->fault) {
            print "faultcode=" . $result->fault->{'faultcode'} . "\n";
            print "faultstring=" . $result->fault->{'faultstring'} . "\n";
            print "detail=" . $result->fault->{'detail'} . "\n";
        }
    }



Mar 10, 2014

I’ve been trying to use Cacti to graph my ADSL sync rate and SNR/attenuation figures for the past few weeks, as I’ve been having issues with my line.

Originally, I was using a BigPond Thomson ST536v6, but unfortunately, the SNMP agent on the Thomson will only expose the Sync Rate, and not the SNR and Attenuation.

So I have decided to use an old SpeedStream 4200 instead. The default SNMP community string is ‘public’ but I wanted to change it to my private one.

To change the SNMP community string, you need to telnet onto the modem to change it.

Once you’ve telnetted in, you can show the current SNMP settings with this command –

xsh> cfg snmp

The output will show you the current configuration –

nam = ""
rd = n
wr = n
dsbl = n

These settings just mean that the default settings are applied.
To update the snmp community string, you need to use the following command –

cfg snmp{comm#0{nam=skynet
cfg snmp{comm#0{rd=y

Those 2 lines will set the community string to “skynet” and set the permissions to read-only.

After setting these, run the command cfg save to save the configuration, and then reboot the modem. This will allow the new settings to take effect.

Dec 18, 2013

In my recent FreeNAS blog post, I couldn’t access the FreeNAS web GUI with the Google Chrome Browser. Trying to access a page would present a 400 Bad Request - request header or cookie too large error.

After doing some research online, I found out that the way to solve this issue was to modify the nginx configuration to allow for large headers.

As this was FreeBSD, the configuration file for nginx was located at /etc/local/nginx/nginx.conf. In a Linux distribution, the file would likely be located in /etc/nginx/nginx.conf
In that file, we need to add this line within the server stanza.
large_client_header_buffers 4 16k;

I’ve modified my nginx.conf file so now it looks like this –

server {
    server_name localhost;

    large_client_header_buffers 4 16k;

    # (rest of the server stanza unchanged)
}

After modifying the configuration file, nginx needs to be restarted.
On FreeBSD, the command to restart nginx is /usr/local/etc/rc.d/nginx restart
On Linux, the command varies by distribution, but in general would be something like /etc/init.d/nginx restart

After restarting nginx, I was able to access the FreeNAS web interface with the Chrome browser.

Dec 12, 2013

In this part, I’m going to go through FreeNAS 9.1.1 from installation to setup and use of the NAS.

FreeNAS has the easiest installation of them all. It comes as an .xz file, so all I had to do was write the image to the install media – which in this case is a USB thumbstick.

To write the image on a Linux system I used dd and xzcat

# xzcat FreeNAS-9.1.1-RELEASE-x64.img.xz | dd of=/dev/sdh bs=64k
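The same xzcat | dd pattern can be sanity-checked on a scratch file before pointing it at a real device – a standalone sketch with made-up file names:

```shell
# Make a small stand-in for the image file and compress it
printf 'FreeNAS image contents' > /tmp/demo.img
xz -kf /tmp/demo.img                       # produces /tmp/demo.img.xz

# Decompress and write it out, exactly as with the real image
xzcat /tmp/demo.img.xz | dd of=/tmp/demo.out bs=64k

# Verify the written copy matches the original
cmp /tmp/demo.img /tmp/demo.out && echo 'write verified'
```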

Booting the NAS from the USB stick brings up FreeNAS with its web interface running on the normal HTTP port.

Booting after installation took around 8 minutes, though I’m not sure whether it’s my hardware or the USB stick causing the delay; any bootups after the first one only took around 3-4 minutes.
Keep in mind that I had not customised FreeNAS yet, so boot times may be improved by tweaking the settings.

After the NAS booted up, I could access it on its web interface using Firefox. Chrome wouldn’t play nice though.

The web interface is very smooth and quick to respond. All configuration can be done from the web interface, from storage operations to networking operations to showing system data. It’s all neatly organised in the menu, with some quick shortcuts along the top. Each function that is opened is opened as a tab, allowing you to flick between tasks quickly and efficiently, without having to navigate through the menu again.

You can see the Reporting, Settings, and System Information Tabs open here

First things first, I had to correct my timezone.
Clicking on the Settings tab at the top provided me some further tabs, as well as the following settings

Updating my Timezone was a simple matter of picking the right one and clicking on save.

Clicking on Reporting shows me a nice graphical overview with some history on the system.

This will allow me to keep an eye on the System to see how well it’s doing under the load of FreeNAS.

I also needed to configure my Default Gateway and Nameserver settings. The DHCP client mustn’t have gotten the settings from my DHCP server.
This is also easily done, just by clicking on the Network button at the top, which brings up the Global Configuration

Next up, I’ll need to actually assign my hard drives to a Volume, or Pool depending on whether I use UFS or ZFS.

Creating Volumes or Pools, which are used to store files, was very easy. Clicking on Storage and then ZFS Volume Manager allowed me to create a new volume with the two 500 GB hard drives I had in my NAS, setting them up in a RAID1, or mirror, configuration.

Once I’ve gone through the volume manager, I can see my new Pool sitting there waiting for me to dump some data onto it

I had other options too – I could have used only 1 drive for my NAS, or had 2 separate devices within that pool created.

Doing something fatal like detaching the volume painted the screen a bright red, instantly making me aware of the dangers of the action.

Don’t think I’ll be doing that just yet.

Volumes are mounted under the /mnt/ directory, so my just created RAID1 volume can be found under /mnt/RAID1/

It takes just a few clicks to create datasets. Datasets allow you to treat a subdirectory like a filesystem with access controls, compression, and snapshot ability.

Clicking on Storage at the top, then selecting RAID1, and then clicking the New Dataset button down the bottom brings up the new dataset window.
Enter a name, click Add Dataset, and it’s created!

Now, time to setup some shares so I can dump data onto the NAS.
Clicking on the big Sharing button brings up the tabs for Apple (AFP), UNIX (NFS), and Windows (CIFS).
Since I don’t have any Apple or Windows devices, I’ll start setting up a NFS share.
Clicking on the UNIX (NFS) tab gives me a Add Unix (NFS) Share button.
Easy enough.

Clicking on that button brings up this window, allowing me to configure the settings for my new NFS share.

Setting up an authorised network, and the path that the share pointed to was easy enough.
After creating the new NFS share, it even asked me if I wanted to enable the NFS service.

After enabling it, I was able to poke it to see if it existed with the showmount command from another computer.

# showmount -e
Export list for

As you can see, poking it showed that it was available.
I can now mount the NFS share with a simple command from any Linux PC –
mount /mnt/point

With it mounted, I can now copy and paste files to the NAS!

In my next post, I’ll have a look at FreeNAS’s plugins.

Dec 04, 2013

I’ve been wanting to obtain a NAS of some sort for a while now, and after seeing some of the abilities of the Synology NAS enclosures, I was set on just buying one of the Synologies.
However, after looking at the cost of the 4 bay NAS, I wasn’t so sure I could shell out for one.

So I’ve decided to build my own NAS rather than buying a pre-built one.
The pros of building my own are that it’s more flexible than the pre-built ones, plus I also get some experience with some more Linux distros on the side!

I’m using some old recycled hardware to save on money, as I wanted to shell out as little as possible.
I’ve managed to scrounge up some old parts to host this NAS on –

  • Asus A8N-SLI Deluxe Motherboard
  • AMD Athlon64 3200
  • 4GB DDR400 Ram
  • Generic Case
  • Thermaltake 430W Power Supply
  • A few 500GB Hard drives

Not exactly the latest and greatest, but it should do for the purposes of serving up a few files and whatnot.

The first thing I need to do after I’ve got my hardware, is to choose a Distro.
I’m going to try three distros before I settle on one to actually use as my NAS, just so I can get a feel of the pros and cons of the different distros.
The three that I’ve chosen for this particular project are –

  • FreeNAS
  • OpenMediaVault
  • OpenFiler

All 3 distros are free to download, so there’s no cost involved in obtaining the distro itself. Support can be bought for FreeNAS and OpenFiler.

A quick rundown on the features, pros, and cons that I’ve found of these 3 Distros so far –



FreeNAS

  • Replication – File system snapshots
  • Data Protection – Raid Z/Z2/Z3
  • Backup Services – Windows Backup / Apple Time Machine / Linux rsync / BSD Life-Preserver
  • Encryption – Volume level encryption
  • File Sharing – CIFS/NFS/AFP/FTP/iSCSI + more
  • Web Interface – No CLI required
  • Plugins – Add functionality easily

Pros –

  • Slick Web Interface
  • Lots of plugins available

Cons –

  • Requires dedicated install drive
  • Higher Hardware Requirements
  • Not many plugins out of the box



OpenMediaVault

  • Based on Debian – Has all the normal Linux Features – apt/cron/avahi/Volume Management
  • Web Interface – No CLI Required
  • Plugins – Add functionality easily
  • Link Aggregation – Make two NICs act like one
  • Wake On Lan – Wake up the computer remotely
  • Monitoring – The normal Linux monitoring abilities – Syslog/Watchdog/SMART/SNMP/etc.
  • Services – The normal Linux services – SSH/FTP/TFTP/NFS/CIFS/rsync

Pros –

  • Nice Web Interface
  • Standard Debian shell and commands
  • Low System Requirements

Cons –

  • Requires dedicated install drive
  • Not many plugins out of the box



OpenFiler

  • RAID support – Supports Hardware and Software RAID
  • Clustering – Supports clusters with block level replication
  • Multipath I/O – Supports Multipathing
  • Based on the Linux 2.6 kernel
  • Scalable – Can do online resizing of filesystems and volumes
  • Volume Sharing – iSCSI / Fibre Channel
  • File Sharing – CIFS/NFS/HTTP DAV/FTP/rsync
  • Web Interface – No CLI Required
  • Quotas – User and Group quotas
  • Based on rPath Linux

Pros –

  • Nice Web Interface
  • Doesn’t require dedicated install media
  • Low System Requirements

Cons –

  • Not many plugins out of the box

In the next part of this series, I will explore FreeNAS and see what it can do for me.
Originally I was having some issues installing it, so hopefully this time around I can get it to install!

Stay tuned for more :)

Nov 21, 2013

I’ve just published my second Android app!
Again it’s for Perth drivers – this time it’s a simple list of today’s Multanova locations.

At the moment, it’s very simple but I’m planning on building more features as I go along.
It updates daily from the WA Police website, and allows you to touch the location to search it on Google Maps.

You can get it from the Play Store.

Hopefully I can get some feedback from users of this app! :)

Nov 14, 2013

I wanted to rename a whole bunch of models, transforming their names into all lowercase with just the hostname rather than the FQDN.
I used this script to do it with bash and vnmsh. The script loops through all models found by a query on a model type handle, and then renames them with a vnmsh update command.



#!/bin/bash
# Path to the vnmsh client tools – adjust to suit your Spectrum install
WORKPATH=/usr/Spectrum/vnmsh

## Pingable
MDLLIST=`$WORKPATH/show models mth=0x10290`
MDLLIST=`echo "$MDLLIST" | grep -vi mname`

# Loop line by line, since each line of output holds several fields
echo "$MDLLIST" | while read -r x; do
    MDLHANDLE=`echo $x | awk -F '[ |.]+' ' { print $1 } '`
    MDLNAME=`echo $x | awk -F '[ |.]+' ' { print tolower($2) }'`

    $WORKPATH/update mh=$MDLHANDLE attr=0x1006e,val=$MDLNAME
done
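The awk field splitting is the interesting part – splitting on spaces, pipes, and dots at once means field 2 is the bare lowercase hostname. A standalone sketch with a made-up line of show models output:

```shell
# A hypothetical line as printed by `show models`
line='0x100abc   HOST1.EXAMPLE.COM   Rtr_Cisco'

echo "$line" | awk -F '[ |.]+' '{ print $1 }'            # 0x100abc
echo "$line" | awk -F '[ |.]+' '{ print tolower($2) }'   # host1
```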