Category Archives: Technical

Measure All Of The Things!

I spend much of what free time I have doing research and development, working towards having a hardware design side of my business. There is currently a fair amount of overlap between this work and my near-obsession with measurement.

With a growing collection of odd and vintage measuring equipment, and designs of my own, I decided that I would start to share images and explanations of some of this, and thus created a new blog: Measure All Of The Things.

So far, I have written about Geiger-Müller tubes, electrostatic voltmeters, and a vintage Japanese milliameter I was fortunate to acquire. The next article I have planned is a description of a large voltage divider I have been working on for a few months. (The divider itself is complete and tested; I am just waiting for the laser-cut parts for the case to turn up from Ponoko in New Zealand.)

If this is your type of thing, please join me at MAOTT!

Freedom Board Resources

Introducing The Freedom Board

This article lists some resources useful for experimenting/working/having fun with the Freescale Semiconductor / Element14 Freedom Board KL25z.

The Freedom Board KL25z is an inexpensive (about 12 AUD) Arduino form-factor compatible platform which sports a microcontroller with a 32-bit ARM Cortex M0+ core, rather than the humble 8-bit AVR CPU of the Arduino itself. The board includes an OpenSDA debugger/programmer so no other hardware is required, other than a USB cable.

Official Documentation

Other Resources

At the time of writing, it appears that the Codewarrior for MCU 10.3 beta is no longer available. This is a shame as, using gcc for ARM, this beta (Windows only) gave unlimited code size; the regular (free) Special Edition only allows up to 64 KB of code. This isn’t to say that other development environments can’t be used, as an ARM Cortex M0+ is an ARM Cortex M0+, whatever the manufacturer. However, I like to use this Eclipse-based IDE as it features the excellent Processor Expert, which allows rapid configuration and code generation for on-board peripherals and common tasks.

Another reason I like to use Codewarrior is that Erich Styger’s blog is an absolutely first class learning resource for both Codewarrior/Processor Expert and the Freedom Board itself. Combine this with the Freescale Community site, and you will be well-supported in your efforts to make your Freedom Board do Cool Stuff.


I – and others – have been through some very frustrating times with the Freedom Board due, in my mind, to poor documentation. It will not debug from Codewarrior out of the box: the supplied firmware only allows for drag-and-drop programming. To get more conventional debugging and programming from Codewarrior, and the full benefit of the OpenSDA goodness, it is necessary to change the firmware. Erich Styger describes the necessary process here.


All in all, the Freedom Board KL25z is an excellent tool at an exceptional price – made all the more valuable when combined with Erich Styger’s learning resources.

For those interested, an alternative product from Texas Instruments exists in the Stellaris Launchpad. This ARM Cortex M4F-based tool comes in at a similar price point. Rather than following the Arduino form-factor, the Stellaris Launchpad follows on from TI’s previous MSP430 Launchpad, and is compatible with some of the Booster Packs (an equivalent concept to the Arduino shield.)

Whether experimenter, student, or embedded professional wanting to do rapid prototyping, the Freedom Board and the Stellaris Launchpad have made working with ARM Cortex microcontrollers very simple and affordable.

Downgrading Iceweasel on Debian Wheezy


By default, Debian Wheezy installs Iceweasel (Firefox) 10.x. This can break a lot of extensions so, for those who would rather stick with version 3.x, this is how I reverted to the older version. Note that there may be an alternative, "official" method, but this is the quick-and-dirty approach that I came up with.

Disclaimer: this worked for me, your mileage may vary. Use these instructions at your own risk. (If you do come across any issues, please get in touch – especially if you find solutions – so I can amend this article.)

Get The Packages

At the time of writing, the current, stable, version of Debian is Squeeze – which installs Iceweasel 3.x by default. We will, therefore, make use of Squeeze packages:

Download the packages appropriate for your architecture and install them using dpkg -i, in the order given above. I think this is the correct sequence to solve all dependencies, but other dependencies may be unmet due to different software configurations. Simply note what’s missing from the messages given by dpkg and find the packages through
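Those dependency messages can be filtered mechanically, if you have a lot of them. A small sketch – the function name is my own, and the sample message is a hypothetical, shortened example of dpkg's output, whose exact wording varies between versions:

```shell
# Pull package names out of dpkg's "depends on" error messages
missing_deps() {
    sed -n 's/.* depends on \([a-zA-Z0-9.+-]*\).*/\1/p' | sort -u
}

# Hypothetical dpkg error line, for illustration:
printf '%s\n' ' iceweasel depends on libnspr4-0d (>= 4.8); however:' | missing_deps
# → libnspr4-0d
```

In practice you would pipe the real output through it, along the lines of `sudo dpkg -i *.deb 2>&1 | missing_deps`.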


Due to the rather onerous process of creating XUL-based extensions (and for other reasons) I am in the process of migrating to the Chromium browser. I went through this exercise because I can’t commit the time to either writing the Chromium extension or fixing the Firefox/Iceweasel one. With Chris Pederick’s Web Developer Toolbar now being available for Chrome/Chromium, my one little extension is the only remaining reason to be using Firefox/Iceweasel as my main browser.

Recent Versions of the Chromium Browser in Debian, Ubuntu


I'm a big fan of the Debian Linux distribution, but one thing that can be a problem is the age of some packages, especially web browsers. I needed to work with a recent version of the Chromium browser; this is how I managed to get it installed without going through the horrendous process of building from source.

Looking for Google Chrome?

If you are happy to trust Google's assertion about not being evil, you might be happy to use a build of Chrome. Betas and development versions are available through the Chrome Release Channels page.

Chromium Builds for Debian and Ubuntu

Recent Ubuntu builds are available from Personal Package Archives (PPA). Ubuntu users can get instructions on how to use PPAs on the "How do I use software from a PPA?" page.

As the Ubuntu add-apt-repository tool is not available under Debian, Anant Shrivastava has a solution here. Once you have this tool installed, the rest is simple:

sudo add-apt-repository ppa:chromium-daily/beta
sudo apt-get install chromium-browser
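For the curious: all add-apt-repository really does is write a Launchpad sources line (and import the archive's signing key). A minimal, illustrative sketch of the line it generates – the function name is my own, and the key-import step is omitted:

```shell
# Build the sources.list line for a ppa:owner/archive spec and a release name
ppa_source_line() {
    spec="${1#ppa:}"
    echo "deb http://ppa.launchpad.net/${spec}/ubuntu ${2} main"
}

ppa_source_line ppa:chromium-daily/beta lucid
# → deb http://ppa.launchpad.net/chromium-daily/beta/ubuntu lucid main
```

The real tool drops such a line into /etc/apt/sources.list.d/ and fetches the key from the Launchpad keyserver, which is why Anant Shrivastava's script is the less error-prone route.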

As I already had an old version of chromium-browser installed, this got me:

The following extra packages will be installed:
chromium-browser-l10n chromium-codecs-ffmpeg
The following NEW packages will be installed:
chromium-browser-l10n chromium-codecs-ffmpeg
The following packages will be upgraded:

Which, so far, is working just fine. Note that this is not guaranteed to work. Whilst it worked without problems on my Squeeze AMD64 installation, your mileage may vary.

Where's WebRTC?

My reason for installing a recent version of Chromium was to play with WebRTC. Unfortunately, the Chromium build from the PPA does not have the necessary components enabled at compile time so, at this point, the Google Chrome build will be required. I will, however, be experimenting with Chromium for other purposes, including writing extensions, which appears to be a trivial process compared to battling Mozilla's XUL.

As WebRTC is new and scary, whilst it is enabled at compile time in Chrome builds, it needs to be enabled in the New and Scary, This May Wreck Your Browser controls, which may be found by navigating to chrome://flags/. Further information may be found on the (not-very-up-to-date) Running the Demos page.

Debian for vi users in Australia


Just about every computer I run, from my servers to my Raspberry Pi, is running some form of Debian Linux. For every installation I do, I have to go through a series of post-installation steps to get the system working the way I want it to. As I do not perform installations on an everyday basis, every time I do one, I have to go look up the various Debian-specific re-configuration commands required. This time I am recording them, and hope that they may be of use to others.

Note that this does not just apply to vi users in Australia – make appropriate substitutions, and you can be an EMACS user in Denmark, if you so wish.

Get Up To Date!

sudo apt-get update
sudo apt-get upgrade

Configure Locale

sudo dpkg-reconfigure locales

I generally check en_AU.UTF-8, en_GB.UTF-8, en_US.UTF-8. On the following screen, I select en_AU.UTF-8 as the default locale.

For the Raspberry Pi, setting the default locale fixes the keymap problem. (The Pi defaults to the GB keyboard layout – Australia uses the US layout, so hash and dollar don’t do what is expected. And it’s been over 11 years since I used a British keyboard.)

See the locale page on the Debian Wiki for details of how to fine-tune locales.
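If you prefer a non-interactive route (for scripted installs, say), the same effect comes from uncommenting the relevant lines in /etc/locale.gen and regenerating. The filter below is a sketch of the text transformation involved – the function name is my own invention:

```shell
# Uncomment a "# en_AU.UTF-8 UTF-8" style line, as found in /etc/locale.gen
enable_locale() {
    sed "s/^# *\($1\)/\1/"
}

printf '# en_AU.UTF-8 UTF-8\n' | enable_locale 'en_AU.UTF-8 UTF-8'
# → en_AU.UTF-8 UTF-8
```

Applied for real, that becomes something like `sudo sed -i 's/^# *en_AU.UTF-8 UTF-8/en_AU.UTF-8 UTF-8/' /etc/locale.gen`, followed by `sudo locale-gen` and `sudo update-locale LANG=en_AU.UTF-8` – check your /etc/locale.gen first.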

Configure Default Editor

As far as I’m concerned, There is No Editor But vi. I use vim, which might not be installed, so installing it first might be a Good Move.

sudo apt-get install vim
sudo update-alternatives --config editor

What’s The Time?

ntpq -crv

Hopefully ntpd has been installed automatically, and is up and running. Using public NTP servers, the stratum entry in this list should be 3. If ntpq throws any errors, try again with sudo. If there is still an issue, ntpd might need to be installed. To check who your ntp peers are:

ntpq -cpe
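The stratum check can be scripted, too. `ntpq -crv` prints comma-separated name=value pairs, so something like this pulls the stratum out (the sample line is a shortened, hypothetical example of the real output, and the function name is my own):

```shell
# Extract the stratum value from ntpq -crv style "name=value, name=value" output
get_stratum() {
    tr ', ' '\n\n' | sed -n 's/^stratum=\([0-9][0-9]*\)$/\1/p'
}

echo 'version="ntpd 4.2.6", stratum=3, precision=-20' | get_stratum
# → 3
```

Handy for a cron job that complains when the box drifts to stratum 16 (unsynchronised).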

Configure your timezone:

sudo dpkg-reconfigure tzdata

Enabling sshd (Raspberry Pi)

The Raspberry Pi comes with sshd disabled. To get it working:

sudo cp /boot/boot_enable_ssh.rc /boot/boot.rc

Don’t forget to edit /etc/ssh/sshd_config to set appropriate security options.
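As a starting point, these are the sorts of settings I would look at in sshd_config – treat them as suggestions rather than gospel, and make sure your own key is installed before disabling password logins:

```
# /etc/ssh/sshd_config – a few common hardening options
PermitRootLogin no          # log in as a normal user, then su/sudo
PasswordAuthentication no   # keys only – install yours first!
X11Forwarding no            # unless you actually forward X
AllowUsers alice bob        # hypothetical user names – restrict who may log in
```

Remember to restart sshd (`sudo /etc/init.d/ssh restart`) after making changes.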

Installing Packages

This is my default package set:

apt-get install dns-browse bzip2 links lynx apache2 subversion php5 php5-cli
libapache2-mod-php5 mysql-server mysql-client libmysqlclient15-dev
automake autoconf make gcc g++ gdb bison flex libtool postfix
expat libexpat1-dev libssl-dev libxml2 libxml2-dev libapache2-svn
imagemagick libmagick++10 libmagick10 ghostscript patch unzip

Apache users may need to enable what I consider to be essential modules:

a2enmod rewrite
a2enmod ssl
a2enmod headers

And here are a few security essentials to pop at the bottom of /etc/apache2/apache2.conf:

# You'll want this for PCI/DSS compliance:
ServerSignature Off
ServerTokens Prod
TraceEnable off

# Drop the Range header when more than 5 ranges.
# CVE-2011-3192
SetEnvIf Range (,.*?){5,} bad-range=1
RequestHeader unset Range env=bad-range
RequestHeader unset Request-Range

# Don’t let people see your subversion stuff:
<LocationMatch .svn>
Order allow,deny
Deny from all
</LocationMatch>

More Security Stuff

Recommended for PCI/DSS compliance, you’ll want this in /etc/sysctl.conf:

net.ipv4.tcp_timestamps = 0

And ditto for IPv6, if you have it configured. (I assume.)

You did check your /etc/ssh/sshd_config, didn’t you?

If you have an Internet-facing system, I will say just one word: iptables. And ip6tables, if you’re cool and appreciate just how good hexadecimal addresses look.
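For the sake of illustration, here is the sort of minimal ruleset I have in mind, in iptables-restore format. This is a sketch that assumes SSH on port 22 is the only inbound service; adapt it before deploying, preferably from a console session so you can’t lock yourself out:

```
# Minimal example ruleset for iptables-restore: drop inbound by default,
# allow loopback, established traffic, SSH, and ICMP.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
-A INPUT -p icmp -j ACCEPT
COMMIT
```

Load it with `iptables-restore < /etc/iptables.rules` and arrange for it to be applied at boot – a pre-up line in /etc/network/interfaces is the traditional Debian way.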

Motorola Xoom: Discoveries, Disappointments, Delights


photo of Xoom in typing position
Typing Position

Back in my happy Nokia N900-using days, I realised that the days of Maemo/Meego were numbered, and that I would need to migrate to another platform – most likely Android. To get a feel for Android, and see whether I would be able to get on with it, I acquired a cheap, Chinese, 7 inch tablet called, rather amusingly, a "Haipad."

All along, I had reservations about using a touch screen, as opposed to a physical keyboard. However, after finding decent keyboard software (Swiftkey X,) I was surprised to find how well I got on with it on a trip away. This, upon the demise of my N900, led to my acquisition of an Android phone.

As part of an ongoing experiment to see just how much I could use mobile devices in preference to laptops and larger computing platforms, I found that I was quite happy to use the phone for short e-mails (and I mean short,) looking things up on Google and Wikipedia, and other minor tasks. However, due to such a large number of web sites failing to accommodate the needs of mobile users, and the general awkwardness of typing on a tiny, on-screen keyboard, the tablet format still seemed preferable for more than quick and casual use.

Impressed with my Samsung phone, I was on the verge of purchasing a 10 inch GalaxyTab, until I discovered what I considered to be some highly undesirable characteristics, namely no standard USB connector and no facility for an SD card. That, coupled with the fact that the device is only 8mm thick – which I foresaw would be less than ideal for my large and clumsy hands – made me do a comparison with the Motorola Xoom, which I could obtain on a data contract from Singtel Optus for very slightly less.

After deliberating the matter overnight, I rang Optus in the morning to order the Xoom, hoping that I had made the right decision.

Fragile, handle with care

photo of Xoom in lectern position
"Lectern" Position

Whilst the Xoom weighs about three quarters of a kilo, as opposed to just over half a kilo of the GalaxyTab, it's still a fragile little beast, especially in the hands of the fragile, but not so little, beast that is yours truly. I have a nasty habit of Dropping Things, and Banging Things Against Other Things that would give any device insufficiently robust the sort of life expectancy generally associated with, say, mayflies.

Again, from the experience with my Samsung phone, I looked to Otter Box for a case. Although the polycarbonate and silicone rubber Defender Series case takes the weight of the tablet to over a kilo, and gives it a certain military look, the thoughtful design and features of this case far outweigh any downside that I can see.

The case comes in two sections, one in which the tablet is permanently embedded, the other forming a lid when used one way, and a stand that allows for flat, typing angle and what I would call lectern use. I find the typing angle very good indeed, whether used on a table, or on the lap.

I feel quite confident about chucking the whole thing in my small rucksack when I am out and about. If someone were to try to snatch it off me, the weight and robustness would allow a quick tap to the head to fell any attacker up to and including a medium-sized rhinoceros. OK, it's not that heavy and robust, but I am sure that you get my gist.

The Tablet

photo of Xoom, case closed
Case closed, looking distressingly like a speed camera.
Ruler shown for scale.

So what about the Xoom itself?

First impressions were of confusion – nothing to do with the hardware, but this was my first encounter with tablet-optimised Android (3.1), all previous experience being with the early phone versions, which just happened to have been deployed on tablets. There are actually fewer physical keys than I am used to (my phone doesn't have physical home/back keys, but it does have a dedicated area of the touch screen.)

After the culture-shock, and some oddities with getting the thing set up (reboot required,) I have had no real problems with the interface.

Bad Stuff

Those things that I do not like about the Xoom were very conveniently forgotten in the sales blurb – and didn't even crop up in any of the reviews that I read. Reviewers seem to be obsessed with games and watching movies, and possibly don't realise that people use these devices for work on the move, so maybe I should not be surprised.

I was put off the GalaxyTab by (amongst other things) the need to carry Yet Another Charger. It was thus with great annoyance that I discovered that the Xoom also requires Yet Another Charger and cannot be charged via USB. Whilst it's not a proprietary connector like the GalaxyTab (perhaps that's why Apple was trying to block sale of the product in Australia,) it does mean that I have to find space for, yes, Yet Another Charger. I appreciate that the current available through USB is limited, but a USB slow-charge option against the wall-wart fast-charge would have been desirable.

The second – and so far final – annoyance is the SD card slot, although this could possibly be broken down into two annoyances. Firstly, it is a combined SIM and SD slot, so you can't take out the SD card without taking out the SIM. My little Haipad gets it right, from my perspective – the SD card goes into one of those little pop-out slots, just like on a laptop. But far more annoying is the information in the quick-start guide that advises that the SD card slot does not work. What? Suspicious that this would be a firmware issue, rather than every device being sent out defective or with missing components, I did a little research and found that – like some of the annoyances with my phone – it can be cured by rooting the device. (The SD slot is enabled with a custom kernel – as far as I understand, there is a kernel module either missing or disabled.)

Good Stuff

photo of Xoom, showing this page in browser
Recursion. This page on the Xoom.

The thing that most excited me about the Xoom (and I am not normally one to get excited) was something else that I saw neither in the sales blurb, nor the reviews, but which I consider to be a very important feature indeed. Had I known about it, I would not have even looked at other products. The Xoom has device encryption. I can find very little technical detail on this – knowing the algorithm would be nice – other than the fact that it uses the regular Linux dm-crypt.

I have long held that, if a device is taken out of the secure (for a given value of secure) environment of office, house, etcetera, it should either be encrypted, or contain no sensitive data. (I consider the loss, by whatever means, of a laptop/tablet/phone containing unencrypted client or other commercial data to be culpable negligence.) This has reduced what I do with mobile devices, and certainly reduced convenience/ease of use as I would, for example, never let a web browser remember passwords on an unsecured device. Whilst I doubt that the Xoom has the same grade of security of Blackberry devices, I certainly feel comfortable using it in ways which I would never have considered on an unencrypted device.

It's got a nice, big, screen. Relatively speaking. Certainly a step up from my 7 inch tablet, and a quantum leap from my 4 inch phone. Which means I can use Better Terminal Emulator Pro to do what I consider an essential away-from-office task, which is ssh into and administer my servers. Note that I was able to do this on my Nokia N900, but the screen and keyboard size made it possible – but painful.

The Xoom has a notification light – something that I really miss on my Samsung phone.

There is little more I can say about the device itself as I have had it for less than a week, and most of the functionality that I enjoy is down to the applications, rather than the hardware. It certainly suits me well.

The Optus Experience

I am a great believer in redundancy. Whilst my phone contracts are with Vodafone, I carry a spare (Motorola RAZR V3i) GSM phone with a Telstra prepaid SIM. This means that if the Vodafone network is out of range – or goes down – I can still communicate through another carrier's network.

I have done likewise with my data. As I don't really like dealing with Telstra, I decided to get my data redundancy through Singtel Optus, the other of the three major carriers in Australia. (They generally drop the Singtel bit – I guess that being an obvious part of Singapore Telecom doesn't look too good on an Aussie brand. I will, therefore, just refer to them as Optus.)

My previous experience with Optus was when I was looking to move my main mobile account away from Telstra. I e-mailed the address given on the web site – and never received a reply. Since that time, Optus has raised its game oh, so much. Due to our local council being a bit, er, just "er", our street address cannot be validated against the national gazetteer. The upshot of this is that I am unable to place online orders with the larger companies like telcos, because my address is invalid. Another issue for me is that I want to use this tablet overseas, with a local SIM. That means I need it to have no carrier locks.

With these two points of enquiry, I contacted Optus via Twitter. The next day, I had a reply with a link to a social media contact form. I submitted my enquiry through the form and, the next day, had a message on my voicemail saying that it was being looked at. The day after, I received an e-mail saying that I could get online special pricing if I called their Sales Support line and that, after I had received the tablet (I was talking about the GalaxyTab, back at that stage,) I should contact them again through the form, to get it unlocked at no cost. Which was quite a surprise.

I phoned the number and, without the expected, interminable time on hold, had my order – complete with dodgy address – dealt with in a friendly and helpful manner. The tablet arrived two working days later, and my subsequent experiences with Sales Support, getting an ETA (only to find that it was sitting at the local Post Office), and activating the SIM, were to the same high standard.

So, all-in-all, a very positive, pleasant, customer service experience. But, to crown it all, I discovered that the Xoom had neither network lock (so it didn't need removing) nor ghastly Optus branding. It arrived as a stock standard machine. Which is good. (I had to root my Samsung phone before I could remove the Vodafone bloatware that was on it.)


From my experience so far, I would recommend the Xoom/Otter Box combination for anyone who wants a relatively secure, robust tablet – and doesn't mind a bit of weight. If you are comfortable with rooting (and thus voiding your warranty and running a small risk of trashing the device) the Xoom, you can also get a working SD slot – although I have yet to try this myself.

Also from experience so far – and we are only talking just over a week – I have been most pleased with my dealings with Optus.

Samsung Galaxy S II – The Bad and the Good

Things Break

I have a long history of destroying mobile phones, often in unusual and amusing (although expensive) ways. It got to the point where I had to go to something physically robust, which is why, five years ago, I moved to the Motorola RAZR. Suffice to say, whilst this phone was indeed physically robust, it did not survive my wife putting it through the washing machine. My second RAZR, however, is still alive and well, and equipped with a Telstra pre-paid SIM – for emergencies.

I can't recall exactly why I abandoned the RAZR in favour of a Nokia N900, but a pocket-sized tablet computer running a variant of Debian Linux was something an inveterate UNIX user like myself could not resist. Note that I refer to the N900 as a tablet because this is how it was actually sold: a tablet computer that could do telephony, rather than a phone that could do computing.

With my track record of destruction, I handled the N900 very carefully, although I did drop it a couple of times without damage. That was until three weeks ago today, when I dropped it face down, outside, onto a rock. I am still unsure of the exact damage, but I do know that the display was destroyed. Whilst I plan to resurrect this old friend, after scouring eBay and some deep soul-searching, I decided that it was time to move on – especially since I could obtain a replacement phone from my carrier, Vodafone Australia, in a couple of days.

After a quick "what shall I do?" on Twitter, a couple of personal recommendations for the Samsung Galaxy S II resulted in me reading a very attractive technical specification. I was on the phone to Vodafone within the hour, and in possession of the new phone within two working days. (It would probably have been quicker, had I not dropped the N900 on a Friday.)

The Bad Stuff

Why am I listing the bad points first? Because I want to get them out of the way. Whilst there are annoyances, I can give a specific list. With the good points, I just keep finding them. So here's what I don't like:

  • It's too thin. Rather, it's too thin for large, middle-aged, and slightly stiff fingers. I was actually struggling to pick the phone up from a flat surface, as there is so little to get a grip on. This problem was overcome quickly though, as I put it in a temporary, cheap, silicone case. Now in a decent,
    Otter Box case, the thinness of the device itself is a non-issue.
  • Vodafone has kindly pre-installed a load of bloatware that I am unable to remove. Whilst I cannot blame this on Samsung, it is an issue that comes with the phone. Getting rid of this unwelcome software requires what is known as rooting the phone – replacing the provided Linux kernel with one that allows the user much more control over the device.
  • The camera appears to have two flash modes – overflashed and off. I will discuss the camera in more detail in the next section.
  • Changing the SD card necessitates removal of the rear cover and the battery and, of course, any after-market case. Compared with the hot-swap SD card on
    my Android tablet, this is something of a let-down.
  • I have saved the worst for last. When you plug the phone in to charge, it makes a loud beep. When the phone finishes charging, it also makes a loud beep. These noises cannot be turned off. Now, like many people, I charge my phone overnight, beside the bed. Being woken up in the middle of the night to be told that one's phone has finished charging is irksome, to say the least. The only solution to this "feature" is to root the phone. So I have to void the warranty, just to stop damn stupid noises.

So, five problems: one already addressed, two requiring rooting to resolve; the camera flash issue might just go away with a suitable firmware upgrade, and the SD card? Well, I don't need to get at it that often.

The Good Stuff

  • It's fast. After the N900, oh boy, is it fast! Refreshing my IMAP mail takes a second or so – the N900 could easily take a minute. Web pages load faster, everything is just, well, fast.
  • I must stress again that the N900 is a tablet that does telephony – but using the Samsung as a phone is a breath of fresh air after the very quirky telephone software of the Nokia.
  • Barring the flash, the camera is really excellent and takes much sharper pictures than I would anticipate from something without a "proper" (read: big) lens. With the flash set to off all the time, I have taken some good low-light pictures. Whilst these pictures have a heavy colour-cast, I am sure that some fiddling with the white balance can correct this. Or even correct it in post-production.
  • Large screen. I had been holding off moving to an Android smartphone until I saw one that had a screen the size of that of the HTC Desire HD, and had
    a physical keyboard. Being a touch-typist with large-ish fingers, I have
    always struggled with on-screen keyboards due both to the small size, and
    not being able to feel where my fingers are. However, on-screen keyboards
    became far more usable for me when I discovered
    SwiftKeyX, a replacement for the
    Android keyboard. Having used this with considerable success on my Android
    tablet, I figured that maybe I could use a phone with no physical keyboard,
    especially as I could always use a BlueTooth keyboard if I found myself
    struggling. I am delighted to say that, with the help of SwiftKeyX and about
    four and a quarter inches of screen diagonal, I have been able to operate the Samsung quite comfortably. The large screen also makes it less of a struggle
    to view web sites – at least the ones that are styled to be viewable on
    smaller screens.
  • Android. I am far from being an Android "fanboi", indeed having a
    deep suspicion of anything originating at Google. However, being a mainstream
    operating system – which the Nokia's Maemo never was – means that there
    is a profusion of good applications available (I will document my essential
    Android applications in another post) and a huge user-base, meaning that
    support should be easy to find.
  • Easy to root. I haven't rooted the phone yet, as the required tools either require a Windows computer (I run Linux on the desktop – the tool crashes my Windows virtual machine,) or Heimdall, which should run on Linux, but won't talk to my specific device. These may be issues for me, but my research suggests that this device is actually one of the easiest onto which a rooted kernel may be loaded. The process certainly looks simple.
  • Stuff just works. My previous experience of Android devices is with
    tablets – devices that are not phones running a phone operating system. This
    can lead to a certain amount of quirkiness, and a requirement for
    considerably more technical knowledge than should be necessary to use
    a consumer device. I would also extend the latter observation to
    desktop operating systems. Stuff should just work, in an intuitive fashion.
    Mobile operating systems appear to be showing the way in consumer
    computing. Native applications (or 'apps,' if you insist,) are far more
    oriented to task completion which, at the end of the day, is what consumer
    interaction with computers should be about.


In my third week of working with the Galaxy S II, I would say that I am very happy with it. Once I have rooted it, I think things will get that much better. Especially if I can go back to charging it at night.

Finding Ada 2010: Kate Lundy

March the 24th was Ada Lovelace Day. I am running slightly late, but would like to give brief mention to Senator Kate Lundy of Canberra, Australia, as my nomination for ALD recognition.

Kate Lundy (Twitter: @katelundy) is not a professional technologist, but a technically well-informed politician and Open Government advocate to boot.

Other than just being an IT-savvy politician, Kate is also a voice of sanity when it comes to the issue of mandatory Internet filtering/censorship in Australia. It is for this positive political influence on technology that I feel Kate should receive recognition. (I am sure that I am not the only one who wishes that she had the Communications portfolio – something she seems far better qualified to handle than the current incumbent.)

When I say that Kate is well-informed, I should advise that some of that well-informedness comes from her political adviser, Pia Waugh (Twitter: @piawaugh,) one of Australia's most influential Open Source advocates. So I should probably slip Pia in as an ALD nominee too, whilst I'm at it 😉

Technology: Business Asset or Business Risk?

Risky Business

Everything we do, every day, has an element of risk. This is equally true in business as in other aspects of life. Whilst we may be aware of the risks inherent in driving to work, we are often unaware of risks involved in our work – not the regular health & safety risks – but more subtle risks to the business itself. Decisions we make in our use of technology assets generate risks, risks that might go unnoticed but could have a devastating impact on our business, should things go wrong. [And thus on the businesses of the clients that rely on us too; always remember that.]

This is a fairly long article, but I make no apology for this: business risk is a very serious matter. It could be worse: given the subject matter and my years in IT/network management, this could have been a very long article.

Seek, and Ye Shall Find

The process of identifying risks and their potential impacts is known as risk assessment. Risk assessments can be carried out by expensive consultants – or by anyone able to apply a little logical thinking and common sense. (When issues are complex or large amounts of money are at stake, it may be well to consider the expensive consultant route.)

For the purposes of this discussion, I am suggesting that we should list all the technologies that we use in our business and do a risk assessment on them. For each item we need to start by asking two questions:

The Yes/No Question

If this technology were to suddenly become unavailable, for whatever reason, would it affect my ability to do business?

The Quantity Question

Should the previous question yield an answer of ‘yes,’ for how long would I be able to work without this technology before its absence became a serious problem?

Write It Down!

Forewarned is forearmed. When undertaking a risk assessment, findings, plans of action, whom to call, etcetera, should all be documented. There is little point going through the exercise, having a risk become an incident and then finding that nobody can remember what is supposed to happen next.
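To make the "write it down" point concrete, here is a minimal sketch of a risk-register entry in Python; the field names are my own invention for illustration, not any formal standard:

```python
# A minimal risk-register: one record per technology assessed.
# Field names here are illustrative only.
risk_register = [
    {
        "technology": "office telephone line",
        "affects_business_if_lost": True,   # the Yes/No question
        "max_tolerable_outage_hours": 1,    # the Quantity question
        "mitigation": "mobile handset with spare SIM",
        "who_to_call": "telco faults line",
    },
]

# Anything with a short tolerable outage deserves attention first.
urgent = [r["technology"] for r in risk_register
          if r["affects_business_if_lost"]
          and r["max_tolerable_outage_hours"] <= 4]
print(urgent)  # ['office telephone line']
```

Even a spreadsheet with these five columns is vastly better than nothing when a risk becomes an incident.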

In the following sections, I will run through a list of what I consider to be critical technologies, although not all will apply to all businesses. This list is not intended to be exhaustive but exists to give readers a starting point in performing risk assessments of the technology in their specific businesses.

The Telephone

Whilst there may be businesses out there that still do not have a computer (I have visions of people sitting at high desks, wearing fingerless gloves and half-moon glasses, writing with quill-pens in heavy ledgers,) very few will not have a telephone.

The Telephone Yes/No Question

As regards telephones, I cannot see the Yes/No question ever returning a ‘no.’ I make very little use of the telephone myself but it is an essential tool for When Things Go Wrong. Anyone who thinks that their business would not be affected by the loss of a telephone service should be asking exactly how they intend to call the fire service when their premises are burning down.

The Telephone Quantity Question

How long a business can operate effectively without a telephone depends on the nature of that business. I would not be comfortable knowing that I had no telephone service for over, say, one hour; the next thing to go might be my Internet connection – how would I call my ISP?

For any business where the telephone is a major means of communication with clients, any downtime is bad.

The Telephone – Discussion

Landlines

As we are starting off by looking at one of the most mature of technologies in use, let’s consider first the most mature of telephone technologies: the landline. As there may be businesses that do not have computers, there may be businesses that do not have mobile telephones. Strive not to be one of these because you need some means to call for service when the landline stops working. (Anyone thinking “oh, but we’ve got 10 lines” should be made aware that a backhoe can take out a 40-pair cable just as easily as a 4-pair cable.)

PABXs

If the business in question has a PABX, it should have a service contract for it. (Please tell me it has a service contract!) The answer to our Quantity Question should be used when negotiating the guaranteed response time for the service contract. If the answer is zero time, the minimum response time should be chosen.

–> Important Bit <–

Or should it? If the contract cost with the minimum response time sounds a bit steep, a little more thought is required. The cost of the outage (loss of business, etcetera,) should be weighed against the cost of the contract. Customer expectations should also be borne in mind as a part of this process. This is an important decision for the business owner and should not be undertaken lightly. This decision-making process applies not just to PABX service contracts but all business technology service contracts and Service Level Agreements (SLAs) for online services such as web hosting, too.

Final Word on PABXs

Things may be different nowadays, especially if the telephone service is provided over fibre; however, traditional PABXs used to have ports for ordinary, analogue, handsets to be plugged in to provide a service in the event of power failure. If you have a PABX, find out if it has such ports and get a handset connected for emergencies if one is not already fitted.

Handsets

Landline handsets tend to be rather hard to lose and are reasonably robust (decent business handsets, at any rate.) Mobile handsets, on the other hand, are horribly easy both to lose and to break. I have two pieces of advice for the mobile ‘phone user to help mitigate risk:

  • Buy a USB SIM card adapter and software. These are very cheap and allow the contents of the SIM card to be backed up to a computer. Make backups regularly, especially if you add new numbers to your phone book on a regular basis. (Make sure that numbers are always saved to SIM, not to phone.)
  • Have a cheap, spare, handset that you can put your SIM card into in the event of the phone taking a tumble, a ride in the washing machine, or whatever. My SIM has survived the death of several handsets, including Death by Washing. My spare handset has a pay-as-you-go SIM card in it; should the main handset be lost or stolen, I can still make calls.

I know very little about smartphones and do not aspire to own one. However, a smartphone is just a portable computing platform. Computers should be backed up. Check with your vendor to find out how.

Computer Hardware

Computer Hardware Yes/No Question

After some consideration, I am unable to think of a scenario where a business has a computer or computers but can work quite happily without them. On the assumption that anyone reading this article is doing so using a computer (rather than have a secretary print off a hard copy to avoid touching that Devil Machine,) I will, as with the telephone, assume that we will be looking at a ‘yes’ response here.

Computer Hardware Quantity Question

This question is where I would expect to see a bit more variance in answers. A business that only uses a computer to run accounts once a week would probably be somewhat more comfortable with an outage than, say, myself. (I am a developer; no computer = no work. It takes a genius like the late but amazing Ada Lovelace to write software before the computer has even been built.)

As the computer is such a fundamental and critical component of my business, I will detail what I do to keep myself in operation.

Computer Hardware – Discussion

If the computer is a key tool in a business, the simple fact is that a spare should be available or some guaranteed means of laying hands on another one quickly. Not only does the spare machine need to be available quickly, it also needs to be ready to do what the regular one does (or did in the event of a failure) – any software used should be installed, it should be set up to work with the office network, etcetera.

Desktop Machines

Thinking about desktop machines, if someone in the organisation is any good with hardware, a set of spares can be carried for emergency repairs. (If several computers are involved, it helps if they are the same make/model or at least that spares are interchangeable.) A spare power supply and hard disc should be carried at the very least. The simplest approach, however, is to have an entire machine into which we can swap the hard disc (assuming this hasn’t died) from a defunct machine, or cannibalise for parts. (Also consider having a spare keyboard, mouse, monitor to hand – although most businesses seem to accumulate these in the course of upgrades.)

Where is the data used by the desktop machine stored? If it is on a server and the user has been disciplined to not save files to the local disc, swapping the machine out with another pre-loaded with the required software should be quick and simple. If, however, files are stored on the local machine, a second, mirrored, hard disc (RAID 1) should always be employed if the machine is mission-critical.

Note that repairs/replacement could be effected by someone outside the business if they were known to be able to attend quickly. However, consideration should always be given to the fact that the critical person may not be available due to whatever reason. Contingency plans should always be made to cover this eventuality.

Laptops

Laptops are far less easy to repair than desktops. Keeping just-in-case spare parts is far more expensive than for their desktop brethren. Furthermore, laptops are easy to drop, steal, spill coffee in (far worse than spilling coffee on a desktop keyboard,) and are generally given a hard time.

If, like me, the primary machine is a laptop, a spare is needed. This is probably the point where some readers will be saying “argh, expensive! I can’t afford that!” I would ask those readers to put a cost on the work that they will not be able to do without the spare.

The spare laptop need not be the same as the main one; it just needs to have the same software installed and be configured in a compatible manner. It can be clunky and slow so long as it is up to the task. I run a large, desktop-replacement ThinkPad as my primary. It does a great job, but is only portable in a fairly loose sense of the word. My secondary/backup is a little Vaio; it has a somewhat smaller screen but is very portable. It was also quite cheap.

Only one laptop ever leaves the house – the Vaio. As this puts it into Getting Stolen risk category, the hard disc is completely encrypted. (My machines hold sensitive client data; I have a duty of care to my clients to ensure that their data never ends up where it shouldn’t.) When at home, I keep the two machines synchronised after every file save. (I do this using version management software – a topic which exceeds the scope of this article but which I mention for the sake of those who might be curious and wish to investigate further.) So, when coffee hits keyboard, ignoring the repair bill, things are not so disastrous.

Oh, and a spare for a laptop can always be a desktop; it might prove a bit tricky to go walkabout with it though. If portability is not an issue, it could save a few $$$.

Networking Gear

I have experienced about as many failures of networking equipment – modems, routers, hubs/switches – as I have actual computers. As with computers, carry spares. If your business has a $5,000 managed hub, have a little $70 switch ready to tide over essential services when it goes “pfft!” I have a spare Ethernet switch to hand (an old one that I upgraded) and a ready-configured ADSL router/wireless access point. Total cost: $150.

Note that network cables tend to suffer all sorts of abuse – having a couple of spares in the drawer could just help save the day.

Maintenance Contracts

My approach in the Computer Hardware section has assumed small to medium businesses which look after their own hardware requirements. An alternative, especially when dealing with expensive servers, is to have a maintenance contract. Maintenance contracts are just as much for sole traders as they are for large corporates. My points made in the PABX section regarding response times/SLAs apply in this context too.

With computer hardware services, there are a large number of fly-by-night operators (they exist in the telecomms sector, too.) Anyone considering a contract should look carefully at who will be delivering the service. My inclination would be to buy only from the Big Names such as Dell, IBM, HP/Compaq, Sun if any form of maintenance contract is required.

For those who particularly want to deal with a smaller operator, go ahead – but ensure that second and third smaller operators are also identified for when the first choice cannot/does not deliver.

What About Apple?

I am not an Apple user (apart from my iPod;) this section was written with PCs in mind but all concepts still apply. Vendors should be consulted regarding maintenance contracts and the like.

Network Services

In this section I will be discussing that all-important tool, the Internet connection, along with e-mail, web hosting and this thing they call The Cloud. Now, I’ve already given two examples of the Yes/No question and the Quantity Question; for this section I will leave these as an exercise for the reader and launch straight into some critical network services, the risks, and how they might be mitigated.

Internet Connection

Readers may have noticed a theme through the discussion so far – critical technologies require some form of backup. (Readers who have not noticed this are invited to have another coffee before re-reading this article 😉) Internet connections – if mission-critical – should have some form of backup just like all the other technologies mentioned so far. Assuming that the main Internet connection is coming in over a telephone line – either ADSL or a private pair (older technology) – mobile broadband makes a logical backup solution. However, there are limitations:

  • Mobile broadband is not available everywhere
  • Mobile broadband can be slow (it hardly deserves the epithet ‘broadband’)
  • It might not be possible to plug it straight into an existing network (some routers can accommodate this though)

My advice with regards to backing up Internet connections for those of a non-technical nature is simple: talk to the ISP providing the main service. If this ISP cannot assist with a backup service, it may be worthwhile shopping around for another ISP that can.

E-mail

There are many different types of e-mail service (Amanda Gonzalez has written this simple guide at Flying Solo,) each with its own risks. The three main risks that an e-mail system presents are:

  1. Not being able to send/receive e-mails
  2. Losing sent/received e-mails
  3. Losing address books

A few tips/points regarding e-mail:

  • The safest e-mail service is probably a hosted one where availability of backups and an SLA are guaranteed by contract.
  • Personally, I like IMAP; I run (and back up) my own mail servers. My entire IMAP folder structure is copied to a second server in my office and also a server in the USA on a daily basis. IMAP also makes it convenient in that I can access my mail from either laptop at any time.
  • The risk of data loss with POP may be mitigated by backing up the appropriate folder(s) on the computer used to access mail on a regular (daily or greater) basis.
  • Unless using an enterprise mail system (GroupWise, Exchange, etcetera) where address books are a server function, address books for IMAP/POP mail clients need to be backed up.
  • Free e-mail services can provide a handy secondary/backup for regular e-mail services. Address books from primary services should be synchronised to secondary services on a regular basis.
  • I would discourage the use of any free e-mail services for mission-critical applications. When paying for a service, the provider has a contractual obligation to make sure that things work; with free services, it is a gamble. (I have seen enough instances of outages, compromised (hacked) systems and user data loss in free e-mail services to recommend them only as secondary/backup systems.)

Web Hosting

Here are a few points to consider when assessing the risks of web hosting:

  • SLA – 99.99% guaranteed uptime sounds great. But is that per year or per month? Lose a 9 there and that’s just under 9 hours in a year. Examine these figures very carefully.
  • Hosting providers (especially the cheaper ones) often perform scheduled maintenance without warning customers. How critical is uptime – is this an issue?
  • Overseas hosting providers often perform scheduled maintenance during the night – which might be in the middle of business hours elsewhere. Could this present an issue?
  • If a hosting provider is also handling DNS and/or registration for a domain, it may be very hard to move to another provider in the event of the first provider going broke (doing a runner, turning ‘funny,’ etcetera; I’ve heard them all.)
  • Always have a hosting contingency plan should it prove necessary to move a site in an emergency.
  • Remember that ftp is not a secure protocol. Personally, I would not use a hosting provider that used ftp with plaintext user name/password logins for any site that handled sensitive (personal, financial) data. ftps (encrypted ftp) should really be the minimum standard.
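The uptime arithmetic in the first point above can be checked with a few lines of Python:

```python
# Downtime implied by an uptime guarantee: the difference between
# "per year" and "per month" interpretations matters.
HOURS_PER_YEAR = 365 * 24   # 8760

def downtime_hours(uptime_percent, period_hours=HOURS_PER_YEAR):
    """Hours of permitted downtime for a given uptime percentage."""
    return (100 - uptime_percent) / 100 * period_hours

print(round(downtime_hours(99.99), 2))          # 0.88 hours per year
print(round(downtime_hours(99.9), 2))           # 8.76 -- lose a 9 and it's just under 9 hours
print(round(downtime_hours(99.9, 30 * 24), 2))  # 0.72 hours if the guarantee is per month
```

So a "99.9% per year" guarantee permits a full working day of outage; the same figure measured per month permits under three quarters of an hour at a time.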

The Cloud

Readers are likely to have been hearing much buzz of late regarding ‘The Cloud.’ The main thing to understand about Cloud Computing is that, rather than having software installed on my computer, I run software on another computer (or computers) somewhere else.

It is at this point that I should disclose that I am a self-confessed Cloud Skeptic. Whilst I can see the many benefits and possibilities of Cloud Computing, I am very much aware of the risks that come with this technology and which need to be addressed before the business world becomes over-reliant on it.

Web Applications – There Rather Than Somewhere

Here I am, a web applications developer, saying that The Cloud is risky. Is this not an odd thing to do? No – and for two reasons:

  1. I constantly analyse the risks of my own business
  2. I make a distinction between the applications I write and host in known physical locations and applications running somewhere (anywhere.) I run Virtual Private Servers (VPS) for myself and my clients; these are located in data centres I have specified. If I were to ask my provider, they could even send me a photo of the physical machines the VPSs are running on. With a Cloud-hosted application, I just have to be content with it running ‘somewhere.’

My concern over Cloud-hosted applications is that the systems required to produce server instances ‘somewhere’ are far more complex (and immature – and I’ll cop some flak for saying this) than those required to deliver a Virtual Machine on that computer over there. –> *points*

Internet Connection

No, this is not an inadvertent copy and paste from earlier on in this article. If I run software – say a word-processing package – on my computer and my Internet connection fails, I can carry on using it. However, if my word-processing package is actually running as a service somewhere in Cloud-Land, whoops – it’s gone. The Internet connection thus becomes the weakest link in the business for which provision needs to be made accordingly – such as a means of being able to work offline.

Use The Cloud, by all means – just be prepared.

Summary

If all that technical detail has readers reeling, not to worry! I will now summarise the entire article in three bullet-points:

  • Technologies on which a business relies present risks.
  • For each technology used by a business, an assessment should be made as to whether it presents a business risk and, if so, to what degree.
  • Action should be taken for each identified risk which may include:
    1. Acquiring backup equipment
    2. Taking out support contracts
    3. Identifying alternative vendors
    4. Documenting plans on how to respond to a risk becoming an incident

Other Stuff

Likely as not, if looking at business risks for the first time, readers might be starting to think that they extend far beyond the technology risks I have discussed. I will, therefore, leave you with some further avenues of thought:

  • Infrastructure – power, water.
  • Premises – where to relocate?
  • Key staff – should more than one person understand their role?
  • Work vehicles – alternatives when off the road?
  • Zombie attack; seriously. Zombies only exist in the movies (and my office, before my first espresso,) but analysing the risks of a hypothetical, if fictional, scenario may identify gaps elsewhere.

Phew, finished! It’s a lot easier to do risk assessments than to tell other people how to do them. Hey, wait, is this thing still recording?

Extensible Metadata for Your CMS


I am a metadata enthusiast, especially when it comes to Dublin Core. When it comes to the Web, I don't just want to see metadata for pages, I want to see metadata that conforms to a formal vocabulary (like Dublin Core.) A quick read of my article Metadata, Meta Tags, Meta What? may help the reader get up to speed on this.

Content Management Systems (CMS) can provide a perfect framework for the creation, maintenance and presentation of metadata. Unfortunately, for most CMS software, this functionality is limited – often to informal, 'legacy,' terms – if it exists at all.

In my ideal world, a CMS would provide a ready-to-use means of associating Dublin Core metadata with all pages and be extensible so that the vocabulary could be extended or extra vocabularies added. Compared with some CMS functionality, this is not something that is difficult to achieve, so I can only assume that the general lack of implementation speaks of a total lack of interest in metadata on the part of the CMS developers.

Some time ago, I presented a set of notes on how to achieve this to a developer working on the Mambo CMS. This work never came to anything at the time as the project forked shortly thereafter and said developer left the project. Subsequent to this, I started working through my notes to produce an extensible metadata extension for the Drupal CMS and also described a toolkit that could be used to work with other CMS. Due to ill-health and lack of time, neither of these bore fruit.

The only progress I have really made on this to date has been in advising the developer of mojoPortal on my metadata concepts; a Dublin Core implementation for mojoPortal is being worked on at the time of writing.

Now, some three years on, I will try to make amends through this article by describing my concepts for adding an extensible metadata management system to a CMS.

I will attempt to keep this article as technology-neutral as possible by describing only the SQL table schemata and queries required to implement the system. However, it should be borne in mind that I am writing from a MySQL perspective and that changes may be required if working with other database technologies – especially when it comes to stored procedures.

One assumption that I am making, which is key to the whole concept, is that every page in the CMS is identified internally by a unique integer field. In the Drupal CMS, this would be the Node ID (nid.) If some other system is employed, a lookup table may need to be employed to implement my concepts.

The Simple, Inflexible Approach

To add metadata functionality to our CMS, we first need to extend the database schema. We could do this either by adding new fields to the table where we store our page content or by creating a new table where we can store our metadata.

Our extended table or new table can have a column for every term. This keeps queries and management very simple – but is highly inflexible as adding terms would require modification of the table schema and the queries that relate to it. I find this approach somewhat distasteful – using a flat and fixed data structure when we have the power of a relational database to work with.

Key Metadata Concepts: Triples, n-Tuples

Metadata are data describing data. In the Web context, metadata are various pieces of data that describe properties of a page or media object.

In its simplest form, a metadata statement comprises three elements, the thing we are talking about, the property we are describing, and the value of that property. This set-of-three may be described as a triple or 3-tuple.

Consider the following example of the 'legacy' description metadata element:

<meta name="description" content="an article about metadata" />

Do you see the three elements of the triple? No; that's confusing, isn't it? This is because we are presenting the metadata on the page we are describing; the name attribute of the meta element tells us the property we are describing, the content attribute the value of that property; the subject – the thing we are talking about – is implied. (Those who deal in the grammar of human languages may wish to compare this with the concept of an imperative sentence, where the subject is implied rather than expressed. The name and content attributes thus form the predicate of that sentence.)

Now, who says we can only present metadata about a page on that page? Nobody. If we are storing this metadata in our CMS, we can present it elsewhere, such as in an external RDF file. Our store of metadata may be used to create a library-catalogue of our entire site.

For this simple case, where our metadata can be represented by triples, we might create a database table like this to accommodate it. (Note that more detailed descriptions of fields will be given for the "real-life" schema later in this document.)

-- Metadata is stored here.
--   subject   - unique ID of the page we are describing.
--   term      - refers to metaterms.term; we look up metaterms.term_name
--               to find the value that goes in the name attribute of the meta element.
--   termvalue - what goes in the content attribute of the meta element.
create table metadata (
  subject int unsigned not null,
  term int unsigned not null,
  termvalue text,
  unique index (subject, term)
);

-- Terms are stored here.
create table metaterms (
  term int unsigned not null auto_increment primary key,
  term_name varchar(64)
);

-- Set up some terms.
insert into metaterms (term_name) values ('description'), ('keywords');

-- Now create description and keywords records for our page, which has a unique ID of 1.
insert into metadata (subject, term, termvalue) values
  (1, 1, 'an article about metadata'),
  (1, 2, 'metadata; Dublin Core; blah blah;');

See? All nice and simple for triples.
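For anyone who wants to try the triples idea end-to-end, here is a quick sketch using Python's built-in sqlite3 module. (The schema is adapted from MySQL to SQLite syntax, so the column types differ slightly from the tables above.)

```python
import sqlite3

# The simple triple schema, adapted from MySQL to SQLite syntax.
db = sqlite3.connect(":memory:")
db.executescript("""
create table metaterms (
  term integer primary key autoincrement,
  term_name varchar(64)
);
create table metadata (
  subject integer not null,
  term integer not null,
  termvalue text,
  unique (subject, term)
);
""")
db.execute("insert into metaterms (term_name) values ('description'), ('keywords')")
db.executemany("insert into metadata (subject, term, termvalue) values (?, ?, ?)",
               [(1, 1, 'an article about metadata'),
                (1, 2, 'metadata; Dublin Core; blah blah;')])

# Turn the stored triples for page 1 back into meta elements.
rows = db.execute("""
    select t.term_name, m.termvalue
    from metadata m join metaterms t on t.term = m.term
    where m.subject = 1
    order by t.term
""").fetchall()
for name, content in rows:
    print('<meta name="%s" content="%s" />' % (name, content))
```

The subject of each triple is the page ID in the WHERE clause; the predicate and value come back as the name/content pair.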

Dublin Core Complicates the Issue

Let's have a look at a couple of meta elements containing Dublin Core metadata assertions:

<meta name="DC.title" lang="en" content="Extensible Metadata for Your CMS" />
<meta name="DCTERMS.created" scheme="DCTERMS.W3CDTF" content="2009-12-05" />

Our DC.title has an extra property, lang, and DCTERMS.created has an extra property, scheme. This somewhat complicates matters and means that the triple is no longer capable of holding all the bits we need. We are now moving up in the n-tuple (a triple is a tuple with 3 components, an n-tuple is a tuple with n components) world. Our triple, or 3-tuple, has now become a 6-tuple.

If you are now wondering how I came up with a 6-tuple, let's have a count:

  1. Subject (this page, implied)
  2. Vocabulary – the first part of the name attribute. From our example, this is either DC or DCTERMS.
  3. Term name – the second part of the name attribute.
  4. Scheme
  5. Language
  6. Value of the content attribute.

So, the mysterious extra member of the n-tuple occurs because we are overloading the name attribute of the meta element.

Our database structure just got a bit more complicated. How much more complicated is up to the developer; we can either stand up as purists and use a fully relational model, or we can cheat, simplify things and hope they don't come back to bite us. If we plan things carefully and consider the scenarios in which we are going to use our CMS, hopefully being bitten by the results of Bad Decisions will not be amongst our worries.

The Fully Relational Method

Is actually not quite fully relational. I have cheated a little even in this method to make metadata searches a little more efficient. Let's have a look at the new schema of our metadata table:

Metadata Table

create table metadata (
  subject int unsigned not null,
  termid int unsigned not null,
  scheme int unsigned not null,
  lang char(8) not null,
  termvalue text
);

Metadata Table Fields

subject – The unique ID of the page in question (eg: nid for Drupal.)
termid – foreign key – refers to the primary key of the metaterms table, described below.
scheme – foreign key – refers to the primary key of the schemes table, described below.
lang – The language of the content of the meta element (eg: EN-US, FR, DE, etc.)
termvalue – The actual value of the content attribute of the meta element.

You will note that this table does not have a field to store the vocabulary. This is not necessary as this may be looked up from the metaterms table.

The columns scheme and lang are designated NOT NULL for purposes of indexing. As values for these are not always present, we would populate these with 0 (zero) and 'NULL' respectively when no values are given. The software generating the meta element for the HTML document would skip creation of the respective attributes if these defined null values were found.
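A sketch of how that generating software might honour the sentinel values (this helper is my own illustration, not taken from any CMS):

```python
# Build a meta element, skipping attributes whose stored value is the
# agreed "null" sentinel: 0 for scheme, the string 'NULL' for lang.
def meta_element(name, content, scheme=None, lang=None):
    attrs = ['name="%s"' % name]
    if scheme not in (None, 0):
        attrs.append('scheme="%s"' % scheme)
    if lang not in (None, 'NULL'):
        attrs.append('lang="%s"' % lang)
    attrs.append('content="%s"' % content)
    return '<meta %s />' % ' '.join(attrs)

print(meta_element('DC.title', 'Extensible Metadata for Your CMS', 0, 'en'))
# <meta name="DC.title" lang="en" content="Extensible Metadata for Your CMS" />
```

(A real implementation would also need to escape quotes in the content, which I have omitted for brevity.)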

Metaterms Table

The metaterms table is where we define all the metadata terms that we can use.

create table metaterms (
  termid int unsigned not null auto_increment primary key,
  vocabterm varchar(32) not null,
  vocab int unsigned,
  defscheme int unsigned,
  unique index (vocabterm)
);

Metaterms Table Fields

termid – The primary key for this table.
vocabterm – The value of the name attribute of the meta element. It is here that a bit of "cheating" takes place. You will recall that the name attribute of the meta element is overloaded by combining both vocabulary and term, as in DC.title. The metaterms table would have a field that contains just the term – at least it would if we were doing things nicely. For the sake of efficiency, however, the vocabterm field contains the same vocabulary+term value that appears in the name attribute of the meta element. (The alternative would be to look up the vocabulary [the DC part of DC.title] from the vocabs table.)
vocab – foreign key – refers to the primary key of the vocabs table.
defscheme – foreign key – refers to the primary key of the schemes table; this is the default scheme for this term. If we want our system to be flexible, we should let the user override this on a per-use basis, if they so wish.

See Appendix A for a dataset that can be used to pre-populate this table.

Vocabs Table

The vocabs table is where we set up master records for the different vocabularies that we will use. One of the key functions of this table is to provide the URIs that should be linked in our HTML document <head></head>.
For a full Dublin Core implementation, these would be:

<link rel="schema.DC" href="" />
<link rel="schema.DCTERMS" href="" />

And here's the schema:

create table vocabs (
  vocab int unsigned not null primary key,
  vocabname varchar(8),
  vocaburi varchar(128)
);

Vocabs Table Fields

vocab – The primary key for this table.
vocabname – This is the first of the values that are joined in the name attribute of the meta element – the DC of DC.title or the DCTERMS of DCTERMS.created.
vocaburi – URI of the schema for this vocabulary, for instance for the DC vocabulary.

Appendix B provides a dataset that can be used to pre-populate this table.

Schemes Table

The schemes table provides a list of possible values that can be used
in the scheme attribute of the meta element.

create table schemes (
  scheme int unsigned not null auto_increment primary key,
  schemename varchar(32) not null
);

Schemes Table Fields

scheme – The primary key for this table.
schemename – The actual value that will appear in the scheme attribute of the meta element.

Appendix C provides a dataset that can be used to pre-populate this table.

Simple/Cheats' Method

If we are prepared to sacrifice flexibility and accept the default scheme in the metaterms table as being the only one that may be used for each term, we can do away with the schemes table altogether and replace the integer column metaterms.defscheme with a varchar column containing that default scheme.

Another option would be to abandon the vocabs table and hard-code the links shown in the vocabs table section into the document template. If additional vocabularies were to be added, any corresponding schema links would also need to be added to the template.

SQL Queries

Whilst the database structure described here should provide what is required to implement a metadata repository for a CMS, I will provide some example queries to help get the ball rolling.

Vocabulary Links

select concat('schema.',vocabname), vocaburi from vocabs
where vocaburi is not null and vocaburi!='';

This will provide values ready to put in the rel and href attributes of link elements. These links could also be added as static text to the page template, as described in the Simple/Cheats' Method section.

Retrieving Metadata for a Page

Assuming that our page/node ID is 1234:

select t.vocabterm, s.schemename, m.lang, m.termvalue
from metadata m
join metaterms t on t.termid=m.termid
left join schemes s on s.scheme=m.scheme
where m.subject=1234;

This will return values for the meta element attributes name, scheme, lang, and content respectively. As values for scheme and lang may be NULL, creation
of these attributes should be suppressed if no value is returned for them.

Note that the schemes table is attached with a left join so that a NULL may be
returned if the value of metadata.scheme=0.
(See Metadata Table Fields.)
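By way of illustration, a couple of rows from this query might render as meta elements like those below. The names and values here are hypothetical, and the scheme and lang attributes are simply omitted where the query returned NULL:

<meta name="DC.creator" lang="en" content="Jane Citizen">
<meta name="DCTERMS.created" scheme="W3CDTF" content="2013-01-15">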

Further queries may be added to this section if I think of anything
else that might be useful.


This is a toolkit; how it is implemented is the choice of the developer. Here are some pointers that may assist.

It may be sufficient for many to implement only Dublin Core metadata. Where this is the case, no provision need be made in the CMS for maintaining the
metaterms, vocabs and schemes tables – the values provided in the appendices should supply all that is needed. If another vocabulary were identified as useful to a reasonable number of CMS users, it too could be added to the inserts in the appendices, again with no provision made for maintaining it through the CMS.

If a form, or section of a form, in the CMS page maintenance area is provided for adding/maintaining metadata for pages, some fields could be pre-populated. DC.title could take the existing page title (I cannot see any reasonable situation where these would differ); DC.identifier – the page URI – could be calculated; DCTERMS.created and DCTERMS.modified could certainly be derived automatically; DC.rights could be taken from a site default; DC.type and DC.format would generally be fixed. And the list goes on. Pre-population of fields would make the task of maintaining metadata less onerous and encourage compliance, which may be an issue in organisations where provision of metadata is mandated.
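On the storage side, such pre-populated values would end up as rows in the metadata table. The column names below follow the retrieval query in the SQL Queries section; the termid is hypothetical, standing in for whatever id the metaterms insert assigned to the relevant term:

-- Hypothetical: store a derived DCTERMS.modified value for page 1234.
insert into metadata (subject, termid, scheme, lang, termvalue)
values (1234, 99, 0, NULL, '2013-02-01');

As noted earlier, a scheme value of 0 indicates that no scheme applies.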

Search facilities could be built to identify lists of documents by author (DC.creator), creation date, etcetera. I created an experimental metadata repository a while back – some four million pseudo-pages, each with three items of metadata. Searches on unique metadata values all completed in under a second, much to my surprise. The repository used nearly the same table schemata (including indexing) presented here, so a powerful search engine would not be hard to implement for a CMS holding very large numbers of pages. I am currently unable to find the search queries I used, but will append them to the SQL Queries section should I come across them at any point.
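In the meantime, a search by author might look something like this sketch, which reuses the join pattern from the retrieval query above. The exact term and value strings depend on your metaterms inserts and are hypothetical here:

-- Hypothetical search: page IDs whose creator metadata matches a name.
select m.subject
from metadata m
join metaterms t on t.termid = m.termid
where t.vocabterm = 'creator'
and m.termvalue = 'Jane Citizen';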

Sitemaps and other machine-readable (RDF) views of the repository could be generated – either as the results of search queries, or just dumps of the entire repository.


The contents of this document are released under the Creative Commons Attribution 3.0 Unported License. If you make use of the material presented here, I require attribution as a contributor to your work. A link back to this page would be nice, too. Yes, you can use it commercially; if you make heaps of money out of it, I'm rather partial to full-bodied reds. Hint, hint.

If you do make use of this material in your project, I'd love to hear from you and link to your project from this page.


Appendix A

Values for the metaterms table.

insert into metaterms (vocabterm,vocab,defscheme) values

Appendix B

Values for the vocabs table.

insert into vocabs values

Note the inclusion of the vocabs HTML and OTHER. I have provided these so that our metadata repository can store the title and doctype of the HTML document (vocab=HTML), and various 'legacy' metadata terms (vocab=OTHER), if so required. If these are excluded, the corresponding entries should also be excluded from the end of the insert in Appendix A.

Appendix C

Values for the schemes table.

insert into schemes (schemename) values
('Box'), ('DCMIType'), ('DDC'), ('IMT'),
('ISO3166'), ('ISO639-2'), ('LCC'), ('LCSH'),
('MESH'), ('NLM'), ('Period'), ('Point'),
('RFC1766'), ('RFC3066'), ('TGN'), ('UDC'),
('URI'), ('W3CDTF');
