Category Archives: Technical

All Change on the Desktop (Again)

My ThinkPad and What I Do With It

By way of preamble, the computer I use on a daily basis is a Lenovo ThinkPad Z61m 9451NTM. Whilst this is not the computer that I hoped it would be when I bought it – it has serious overheating issues, and the marque seems to have drifted away from IBM-era ThinkPad quality since going over to the new manufacturer – it is quite a powerful machine, better considered as a desktop replacement than as something one can use on one’s lap. (If the weight doesn’t crush your knees, the overheating will burn them. And other bits.)

At the moment, this machine is running Gentoo Linux with my own customised LAMP stack and the fast, minimal Fluxbox window manager as my desktop. This machine runs all the services that my web servers run, plus the GUI desktop environment.

I am not the most demanding user in that I do not deal with graphics, games or anything else that requires much in the way of computing power. However, as a developer I run VMware Workstation so that I can test with web user agents running on Windows XP (not to mention running iTunes). This machine came as standard with 0.5Gb of RAM; I bought an extra 1.0Gb when I got it, but had to upgrade to 3Gb to be able to run VMware guest operating systems reliably and without dragging down performance.

The ThinkPad is normally run dual-screen with xinerama, using an Asus VW223U 22″ widescreen monitor as my primary and the laptop display as the secondary. Input is via a Microsoft (a company I usually avoid!) Natural (ergonomic/split) keyboard and a Logitech TrackMan Marble optical trackball. It should be noted that I actually have to keep two config files for the X server – one for single-monitor and one for dual-monitor use. Starting the dual-monitor configuration without the external monitor attached renders the keyboard and internal display unusable; I now have a special startx script that asks which configuration I am running before starting the X server.

The supplied 80Gb hard disc proved to be inadequate for my demands (virtual machines can take up a lot of room) after about 6 months. The 160Gb 5400rpm disc I put in as a replacement – copying the old disc straight over – was not the wisest of investments as I have now, a further 18 months down the track, had to acquire a 320Gb 7200rpm disc for my next move. (Note that both replacement discs are Hitachi TravelStars as per the original equipment, to provide design continuity.)

This machine is my “everything” machine. It is the machine I work with, the machine I use for electronic design and would also be the machine I use as a DAW (Digital Audio Workstation) were it not for serious limitations of the Gentoo Linux distribution currently installed.

Greetings to Deb and Ian

Having used Unix-derived operating systems for some 22 years, when I first started working for myself in 2001 I was very quick to dump Windows as my desktop operating system and move to Linux. The distribution in question was SuSE, as I had experience with this from my previous life, using it on test servers based on 486DX4-100 machines. For a while I was satisfied with SuSE, but every upgrade caused more problems and, back in 2006, my old Toshiba – already suffering from the loss of a memory socket – became totally unusable. It was then that I migrated to Gentoo Linux.

Gentoo has served me well over the last two-and-a-bit years, both on the desktop and on my web servers. However, it appears that the AMD64 version of Gentoo is far less well maintained than it could be, and the burden of building ebuilds on 3 servers plus the laptop has become unsustainable. In addition to that, I am unable to run and/or build various audio applications such as Rosegarden, Qtractor, Audacity and others. With the restriction in working (or able-to-use-computer) hours that my health situation enforces on me, I have decided that it is time to go back to binary Linux distributions. I have sworn off RPM-based distributions for life (I ran a RedHat web server once, as well as my SuSE boxes) – never, ever, again. Whilst the popular choice of Linux distribution (other than RedHat and the now-Novell SuSE) would appear to be Ubuntu, this distribution does not appeal to me as it appears to be aimed (and good on ’em for doing it) at the non-technical market.

My next move will be to Debian, an old, established distribution and that from which the popular Ubuntu is derived.

How to Migrate Without Inconvenience

Many of the tools that I use are one-offs that I wrote myself. They were never designed to work on other machines or outside of my current working environment. My LAMP stack is customised, and I have few scripts (especially for the Perl part) that would enable me to re-install it quickly on a new machine or hard disc.

Migration from Gentoo to Debian by starting with a clean disc could take upwards of a week – time I cannot afford.

My solution, which I have just started on within the last couple of weeks, is to do the migration in stages. This is possible because my ThinkPad is able to boot from an external, USB-connected, hard disc. The vendor from whom I obtained the new 320Gb disc happened to be clearing external USB-to-SATA disc enclosures for $5 AUD apiece. My new disc is mounted in one of these, so I am able to dual-boot between my day-to-day environment and the fledgling Debian environment. I even have an entry in the Grub menu of my internal disc to allow me to boot the external one.
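For the curious, that menu entry is along these lines (Grub legacy syntax; the device, kernel and initrd names here are illustrative rather than exact):

title Debian (external USB disc)
root (hd1,0)
kernel /boot/vmlinuz root=/dev/sdb1
initrd /boot/initrd.img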

Issues So Far

I have only encountered three issues so far with the migration process:

  1. If I boot from the external disc, I can’t get Grub to boot anything – either the internal or external disc. I can only guess that for some reason the disc IDs are different when booting thus.
  2. Time. I simply don’t have enough at the moment to devote to this work. However, because I can do it bit-by-bit, this is not a significant issue.
  3. Last but not least, the aforementioned overheating issue is the main problem. The only way to get this machine to work – other than in the depths of Winter – is to use a kernel that has the IBM ACPI module and tools, and to start the fan running at full speed as soon as boot is complete. As things stand, I think I will have to build an appropriate kernel whilst booted into my regular environment, copy it to the new disc and try from there. Either that or pull the thing apart and hard-wire the fan.

The Future

Once I have a working system and have copied everything across, I will move the external hard disc into the laptop. I will then build myself a realtime kernel and see if I can’t get those audio applications working.

Due to the lack of portability of this ThinkPad – or at least the inconvenience of portability – I have decided to move to a Netbook for use elsewhere in the house. This will not be an Eeeeeeeeeeeeee or any such beast, but an elderly and very slimline ThinkPad (a real one, from IBM.) Whilst it lacks any removable media devices or even USB, this machine does have a PCMCIA slot so that I can have WIFI access. I plan to replace the 4Gb hard disc with an adapter and a Compact Flash device (might have to get the soldering iron out if I can’t buy a suitable adapter) and re-stuff the battery.

This little wonder will probably be running the Slackware Linux distribution, as I have had success with this before on old hardware. It is a really nice (and light) machine – I look forward to its rejuvenation.

Note: if anyone can suggest a light-weight web browser (user agent) that runs well on a low-resourced system (Flash and other plugins NOT required), please let me know.

Update 1

I have attempted to install my Gentoo kernel onto the Debian disc, but it won’t boot properly. I am now installing the “official” way, but booted into Gentoo and then chroot’ed into the mounted Debian disc. This allows me to apt-get everything I want without having to actually boot into Debian.

Open Source MIDI Control Surface

I am in the process of designing a MIDI control surface so that I can control knob-less synthesisers and also control virtual sliders in Digital Audio Workstation applications like Qtractor, Rosegarden, Ardour, etcetera.

The design is based around an 8-bit Atmel AVR microcontroller, a Texas Instruments ADS7961 16-channel, 8-bit ADC, an array of 16 potentiometers, a cheap 2-line character LCD, some buttons and possibly a numeric keypad – although my current thought on that is ‘bloat’.

Firmware will be written in C, compiled with avr-gcc with the avr-libc C library.

I’m currently struggling over how to avoid sending MIDI ‘noise’ from the pots. I can foresee that the ADC will be picking up changing values without the pots being touched. How to determine whether a change is ‘real’ and needs to be transmitted as a value and what to ignore is the issue. Average a number of samples and then send only if the value has changed from the last average, ignoring any changes smaller than the two LSBs? I don’t know and am open to suggestions.
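To make the question concrete, here is a minimal sketch of the averaging-plus-dead-band idea in C (the window size, the threshold and the function names are placeholders, not a settled design):

#include <stdint.h>

#define NUM_POTS  16
#define WINDOW    8    /* samples averaged per pot */
#define DEADBAND  4    /* two LSBs' worth of change */

static uint16_t sum[NUM_POTS];
static uint8_t  count[NUM_POTS];
static uint8_t  last_sent[NUM_POTS];

/* Assumed to exist elsewhere: adc_read() talks to the ADS7961 over SPI,
   midi_send_cc() pushes a Control Change message out of the UART. */
extern uint8_t adc_read(uint8_t channel);
extern void midi_send_cc(uint8_t pot, uint8_t value);

void poll_pot(uint8_t pot)
{
    sum[pot] += adc_read(pot);
    if (++count[pot] < WINDOW)
        return;                      /* window not yet full */

    uint8_t avg = sum[pot] / WINDOW;
    sum[pot] = 0;
    count[pot] = 0;

    /* Transmit only when the average has moved outside the dead-band. */
    int16_t delta = (int16_t)avg - (int16_t)last_sent[pot];
    if (delta >= DEADBAND || delta <= -DEADBAND) {
        last_sent[pot] = avg;
        midi_send_cc(pot, avg >> 1); /* 8-bit reading to 0-127 MIDI range */
    }
}

Whether a fixed dead-band or something more adaptive is the right answer, I don’t yet know.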

Software will allow each pot to be assigned a MIDI channel and controller. I’ll probably set it up so that assignments can be saved as programmes which can be recalled depending on what one is controlling.
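Roughly what I have in mind for the assignments, again sketched in C (the structure and the EEPROM idea are assumptions at this stage):

#include <stdint.h>

struct pot_assign {
    uint8_t channel;     /* MIDI channel, 0-15 */
    uint8_t controller;  /* MIDI controller number */
};

/* One 'programme' is just an array of these; saved programmes could
   live in the AVR's EEPROM and be recalled by number from the LCD menu. */
static struct pot_assign programme[16];

extern void uart_send(uint8_t byte);  /* assumed UART driver */

void midi_send_cc(uint8_t pot, uint8_t value)
{
    /* A Control Change message is the status byte 0xB0 ORed with the
       channel, followed by controller number and value (7 bits each). */
    uart_send(0xB0 | (programme[pot].channel & 0x0F));
    uart_send(programme[pot].controller & 0x7F);
    uart_send(value & 0x7F);
}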

Once I get to that stage, links to schematics and source code will be posted on this page. This will all be released under a Creative Commons Attribution-Share Alike license. (So yes, you could build this and sell it if you wanted to.)

Anyone interested in this project, please get in touch. There is an e-mail address at the bottom of the page, if you don’t already have one for me.

What About Midibox?

A couple of people have asked me whether this is like Midibox, or whether I had seen it. To this I would answer: not really. This project:

  • Is aimed at producing a very basic/simple/cheap device. Midibox is modular and far more sophisticated.
  • Will be using a different family of microcontrollers (I’m an AVR man, not a PIC man), although the level of simplicity that I’m looking at should make it fairly easy to adapt to other families such as the HC08, 8052 – or PIC.
  • Won’t have an operating system.

So it’s about minimalism, whereas Midibox is about modularity and flexibility (as far as I can see). And the reason that I’m doing this as opposed to re-creating an existing design (like Midibox) is because I want to. I like doing things from scratch.

CPU

I have decided to use an AT90S8515 for development purposes since:

  1. I have worked with these before.
  2. I have an STK500 development board/programmer.
  3. I have a couple of devices kicking around in my office in PDIP packages, which are easy to get probes on when doing diagnostics.

Whilst this device is technically obsolete, its replacement, the ATmega8515, is compatible and very cheap at approximately $4 AUD from my regular suppliers, Soanar Plus. (This supplier may not have a huge range, but prices of single/low-volume items are often considerably cheaper than the likes of RS, Farnell, etcetera.)

 

Web Accessibility Techniques workshop in Adelaide on 20 November 2008

Just passing on this communication from Vision Australia:

Vision Australia is running their popular Web Accessibility Techniques workshop in Adelaide on 20 November 2008.

This full-day workshop run by Vision Australia is targeted at web-development team leaders, corporate communications professionals, content authors, web programmers and designers, and web contract managers. A basic knowledge of HTML is helpful.

This workshop provides a thorough overview of accessibility issues and the techniques used to address them. It covers the World Wide Web Consortium’s Web Content Accessibility Guidelines and their implementation.

Course outline & registration details here.

sql-o-matic

I have just released the code for my sql-o-matic at Perl Monks.

This is a partial version of a re-write of a system that I have been using for a few years to eliminate much repetitive coding by generating both SQL and Perl code directly from a database schema.

The new version is different from previous ones in two ways:

  1. It runs from the command line rather than as a web application.
  2. Rather than generating MySQL statements that are run from Perl, SQL is provided to create stored procedures, along with Perl subroutines to call them.

The code that I have released is not my finished version of sql-o-matic. There is more work to be done on this, and it will not be going on public release, since what I will be doing from now on will not be creating universal code, but code that is very specific to my own coding standards and modi operandi.

Nixie Kitchen Timer

Preamble

I have long been dissatisfied with the inflexibility of the humble kitchen timer. The old clockwork ones had an excuse – with a purely mechanical device, one can't just "fix it up in software". Electronic kitchen timers that I have come across tend to have up/down buttons for setting the time and a start/stop button to set the thing running or get it to stop again (and possibly cancel the alarm before the noise drives one mad.)

Whilst most of my cooking doesn't call for timing of any precision, or even of fixed intervals ('cook until it turns brown' rather than 'cook for n minutes'), there are tasks where I do need timing, the main one being batch frying. When preparing my beef'n'buckwheat schnitzels (tenderised slices of topside of beef deep-fried in a spiced buckwheat batter), I cook these one-by-one for 3 minutes apiece, then put them into the warming cupboard (actually the oven set to about 70 degrees Celsius).

Trying to read 3 minutes off the oven clock is not easy – if I note the time and then wait for the clock to read the original time plus 3 minutes, I could be out by nearly a minute either way due to the lack of a seconds display. Not only that, but I have to remember to note the time as soon as I put the piece into the fat and then watch the clock without getting distracted. If I were to use a kitchen timer, I would either have to deal with the imprecision of a mechanical one (not good for short periods) or fiddle around for nearly the full 3 minutes trying to set an electronic one – for every piece that I cook.

Original Solution

Thinking over this issue, I created a set of specifications for my own timer:

  • Work in minutes only (I know the lower end of my 60-times-table well enough) – timer to run from 000-999 minutes
  • Time set by a set of 3 BCD switches; the type under consideration has up/down buttons per digit.
  • Time displayed on an array of 3, multiplexed, 7-segment LEDs
  • Use decimal points of the display to indicate position within the minute (1 lit=15s, 2 lit=30s, 3 lit=45s); finer resolution not required for my type of cooking.
  • Provide up/down count with audible alarm when down count reaches zero; count direction determined by a toggle switch
  • Audible alarm to run only for a few seconds – digits should flash on alarm condition until stopped
  • Pause/stop push-button pauses count on first press, on second press stops and resets count to values set by BCD switches. Clears alarm condition when pressed after end of countdown.
  • Start button sets timer running, resets count and starts again if pressed when unit running. If unit is in end of count alarm condition, clears alarm, resets and starts running again.
  • On/Off toggle switch isolates unit power
  • Unit controlled by 8-bit microcontroller taking timing signal from 32768Hz crystal. (Probably use Atmel AT90S8535, since I've got one kicking around doing nothing)
  • Unit runs from a 12V wall-wart; logic voltage is provided by an LM2576-5 'Simple Switcher' from National Semiconductor (I got a job lot off eBay recently).

This design would mean that I would only need to set the BCD switches to 003 and press start at the beginning of the process. Each time I put a new piece in the fryer, I would just have to press start again. Simple!
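As a first pass at the firmware, the once-per-second housekeeping might look something like this (a sketch only, assuming a 1Hz interrupt has already been derived from the 32768Hz crystal; names illustrative):

#include <stdint.h>

static uint16_t minutes;       /* 000-999, shown on the three digits */
static uint8_t  seconds;       /* position within the current minute */
static uint8_t  quarters;      /* 0-3: how many decimal points to light */
static uint8_t  counting_down; /* set from the up/down toggle switch */

extern void alarm(void);       /* assumed: sounds alarm, flashes digits */

/* Called once per second from the timer interrupt service routine. */
void tick(void)
{
    if (++seconds == 60) {
        seconds = 0;
        if (counting_down) {
            if (minutes && --minutes == 0)
                alarm();
        } else if (minutes < 999) {
            minutes++;
        }
    }
    quarters = seconds / 15;   /* 1 lit = 15s, 2 lit = 30s, 3 lit = 45s */
}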

Short-Lived Tubes

A couple of days ago, I spotted some Nixie tubes on eBay – 18 x ИН-12 (IN-12 for those whose displays do not support the Cyrillic alphabet) tubes with sockets for an "I can't believe it's so cheap" price. I immediately bought them (before anyone else did) and only then had a look at the specification sheet. These units are only given a rated life of 7500 hours. For intermittent use, this is OK; however, I was going to be putting these into clocks – having the tubes burn out in less than a year makes them less than suitable for such an application. They still look to be nice tubes; socket mounting simplifies PCB design considerably, as I can just run a 0.01" ribbon cable from the board and split it up at the socket. I also like the profile of these top-reading tubes. What to do with them?

My kitchen timer once again drifted into my mind. Rather than messing around trying to drive 3 lots of 7 LED segments and all the lookup tables that involves (without using dedicated chips), I can just use a BCD-to-decimal decoder, 10 HV NPN transistors to connect it to the Nixie cathodes and 3 HV PNP transistors to sit in the anode circuits to handle the multiplexing. Other than requiring a second power supply to provide the 170V B+ for the tubes, the design is no more complicated than the one using LEDs – and out of LEDs and Nixie tubes, I know which I would rather have to look at.
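A sketch of the multiplexing from the microcontroller's side (port assignments assumed, and the interrupt vector name varies by device):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

static uint8_t digit_bcd[3];  /* hundreds, tens, units to be displayed */

/* Assumed wiring: PB0-PB3 feed the BCD-to-decimal decoder, PB4-PB6
   drive the three HV PNP anode switches. */
ISR(TIMER0_OVF_vect)
{
    static uint8_t current;

    PORTB = digit_bcd[current];           /* anodes off, BCD code set up */
    PORTB |= (uint8_t)(0x10 << current);  /* enable this digit's anode */

    if (++current == 3)
        current = 0;
}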

The Story So Far

The tubes are somewhere between here and the Ukraine; I believe that I have all other parts to hand, although I may acquire a new 32768Hz crystal rather than using a cannibalised one. Software does not look to be too much of a challenge – switch debouncing is new to me so I may even cheat and see if I can find a keyboard scanner chip or somesuch that will do the job for me and can be interrogated via an I2C or SPI bus.
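Should I end up doing the debouncing in software after all, the usual counter-based approach looks manageable (a sketch, assuming a 1kHz polling interrupt; names illustrative):

#include <stdint.h>

#define STABLE_COUNT 5  /* about 5ms at a 1kHz poll rate */

extern uint8_t button_raw(void);  /* assumed: reads the raw input pin */

/* Poll from the timer interrupt; returns the debounced state. */
uint8_t button_debounced(void)
{
    static uint8_t state, count;
    uint8_t raw = button_raw();

    if (raw == state) {
        count = 0;                 /* input agrees with current state */
    } else if (++count >= STABLE_COUNT) {
        state = raw;               /* stable long enough: accept it */
        count = 0;
    }
    return state;
}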

Stay tuned!

Resources

  • Nixie discussion group and resources on the NEONIXIE-L Yahoo Group.
  • My original set of Nixie tubes and the ones that are on their way come from eBay member mycomponent in the Ukraine.  This vendor doesn't always have Nixie tubes listed, but often has Russian vacuum tubes (valves).

Enter the eyeProd

Preamble

Even though I hail from the Walkman era, until recently, I had never owned a personal/portable music player. I toyed briefly with the MP3 player facility on my mobile phone but found it a shocking piece of software and an absolute pain to use. Back last November, for some reason I cannot recall, my wife decided that she needed a personal MP3 player. After a quick read of some Choice (the Aussie version of ‘Which?’) reviews, it appeared that the most appropriate unit would be the Apple iPod Nano.

We quickly decided that such a device would also take care of my Christmas present, so a his’n’hers pair was ordered. And no, we did not wait until Christmas to open them; our Christmas was actually over by the end of November.

The big worry – are forty-somethings too old for such things?

Small, But Not Fiddly

I was not quite prepared for how small the iPod – or eyeProd, as I decided to call them – was. With my less-than-nimble fingers, I was a little concerned as to whether I would be able to operate such a dinky device. However, my fears were unfounded; the ergonomics of the controls are better than they look, although I would have preferred a more tactile interface.

The screen, whilst a tiny 2 inches in diagonal, is clear and bright.

The only aspect that I do find a little fiddly is plugging in the USB cable – I think that the body of my eyeProd may be slightly distorted so there is something of a knack getting it plugged in.

Ear Cruds

Ear bud type phones have never appealed to me; I have fairly small ears and every type I have tried has a tendency to fall out unless I hold my hands over my ears. The Apple ear bud phones were no different in this respect from others of previous experience. Whilst they were in the right position, the sound quality was surprisingly good. However, sitting or lying still without breathing was about the only way that I could prevent them from shifting and thus changing the sound.

In my mind, these ‘ear cruds’ detracted somewhat from the overall product so I was little disturbed when fate struck them a fatal blow.

I’m not quite sure how it happened, but I was sitting down listening to my eyeProd through my ear cruds when one of the dogs wanted to go out. I stood up and somehow the cord got caught around the dog and eyeProd and all got dragged across the room. Luckily, I keep the eyeProd in a plastic safety-case so no damage was sustained there. However, that was the end of the left channel of the phones. A quick poke around proved them to be anything but maintainer-friendly, so no repair was possible.

Bereft of the Apple phones, I tried my ancient, beloved and much-repaired Yamaha YHD-3’s. Much to my horror, the sound was absolutely ghastly. Fiddling around with the equalisation settings (sadly all presets) on the eyeProd just yielded various different forms of ghastliness – nothing that I would want to listen to. The next test was a pair of recently-acquired Jabra C820s noise-cancelling phones. (I got these for when I really need to concentrate on my work.) The sound from these was consistently boomy, irrespective of EQ setting, but I think that is a characteristic of these phones.

This left me with only one solution – to buy a new set of phones.

Sennheiser – We Make Speakers, Not Computers

A couple of hours Googling and reading reviews persuaded me that the Sennheiser PX200’s were the phones to go for. Whilst I did need to make an EQ tweak (I am using the ‘Acoustic’ setting) on the eyeProd, these phones proved to be a good match.

I suppose that it is inevitable that headphones made by a manufacturer of loudspeakers should sound better than those from a strangely cult-ish computer company. The PX200’s are everything that the Apple ear cruds weren’t: they are the most comfortable over-ear phones I have ever worn (better even than my trusty Yamahas), sound excellent (even when moving), have a property called ‘build-quality’ and look like they are the result of some fairly serious design work.

The PX200’s fold up very cleverly and can be stored in a plastic case which even keeps the cord tidy. The case is very similar in size and appearance to a spectacles case – see the photograph for comparison. Initially, I thought this was just a gimmick but this storage system is very practical and, once again, shows that some fairly serious design went into this product. Insert obligatory comment about German engineering if you will.

eyePrunes – Filling Up the eyeProd

The part of the whole eyeProd thing that makes me unhappy is having to use Apple’s iTunes (or eyePrunes, as I call it) software to get media on and off the device. This software is available only for Windows and (of course) Macintosh, which leaves those of us who use other operating systems rather out in the cold. I think that the main reason for this is DRM – Digital Rights Management. This is the means by which Apple can sell you encrypted music that can’t be shared illegally (unless you decrypt it – also illegally). DRM upsets some people terribly to the point of foaming at the mouth. I don’t really care about DRM myself – I just object to Apple’s monopolistic attitude. I’m surprised that they even condescended to provide a Windows version of eyePrunes.

Having got that minor rant out of the way, I will go on to say that there is software available that will supposedly let you use your eyeProd with Linux, but the one I tried (can’t recall what it was) trashed the database on the eyeProd causing me to have to do a factory reset and then load everything on again.

Until a couple of weeks ago, I was having to reboot my laptop into Windows XP every time that I wanted to add or change anything on my eyeProd. Unfortunately, as the laptop normally runs Linux, which keeps the hardware clock in UTC rather than local time, the clock on Windows is always incorrect for my timezone, as Windows assumes that the hardware clock is set to local time. This means that in a dual-boot situation, the clock on the eyeProd always shows UTC. Having recently deleted the Windows partition from my laptop and installed Windows XP under VMware instead, these issues are things of the past. I can now fire up eyePrunes in the Windows virtual machine – no reboots required and no issues with the clock being out-of-whack.

With the eyePrunes software, one can ‘rip’ CDs (even my modest collection took a fair while to transfer – a friend with a large collection has been at it for a couple of months) or purchase music downloads from Apple.

Buying Music From Apple – the iTunes Store

Through the eyePrunes software, one can search and purchase music directly from Apple. It really is a quick and simple process, given a decent Internet connection. I have started to try to rebuild much of my old music collection that got left in England due to it being on vinyl or cassette (well, the bits I still like, anyway). Purchasing from Apple is not only quicker, but also marginally cheaper than buying CDs. I still have to buy some CDs through Amazon, as some of the more obscure stuff (like early Kraftwerk) simply isn’t available from Apple. Say what you like about the DRM issue, but the eyePrunes Store works for me.

I have a long-held belief that you can only really judge the quality of a vendor or service provider after something has gone wrong. It just so happened that I had purchased the album ‘Foxtrot’ by Genesis from Apple and found a digital ‘blip’ a little way into the first track. (For those who still use vinyl, that’s like a bad scratch.) Using the appropriate mechanism on eyePrunes (another reboot!), I reported this and the very next morning received a very snotty e-mail from Apple along the lines of ‘tough luck, no refunds, read the terms and conditions.’ I wrote back pointing out that I didn’t want a refund, just an uncorrupted copy of my music and that, by the way, we do have such things as consumer laws in Australia. The next mail from Apple was of an altogether different tone, apologising profusely for the first e-mail (I have a mental image of someone at the other end being given a whack round the back of the head à la Basil Fawlty and Manuel), refunding the purchase price and giving me some extra credits to use as I wished.

To conclude on this issue, Apple appears to have some excellent customer service staff – I just happened to strike a complete pillock first time round, who is now probably cleaning the staff loos rather than being allowed anywhere near the helpdesk again.

Whilst I am more than happy to buy music from Apple, I am delighted to learn that Amazon will be extending its MP3 download service to countries outside the USA later this year (or so they tell me). Competition is a wonderful thing.

Back It Up!

The only option in eyePrunes for backing up one’s music is to do so onto CD. No, I can’t find any way to make it work with a writeable DVD, and there is no way that I want to be backing up several gigabytes of data onto CDs. My usual way of backing things up – in the Unix world – is using rsync. After some thought, I installed Cygwin on my Windows partition and set up a little shell script, invoked through a Windows batch file, that would rsync my entire eyePrunes directory onto my file server. It works like a dream, all done over the network, no fiddling with blank media. My article on VMware describes this further, including how I was able to reverse the process to get my old eyePrunes directory onto my new Windows virtual machine. I’m sure that there are other ways to do this, but this works best for me. (It works best when plugged into my Gigabit Ethernet backbone rather than over a wireless connection – that’s just a bit slow.)

Conclusion

Whilst I was able to get the bulk of my CD collection into the eyeProd’s 8Gb memory, music purchased from Apple means that I now have somewhat more than will fit, so a certain amount of juggling is required. It was quite impressive when I had all my Wagner operas – including the entire Ring Cycle – on there, but I now have to be a little more selective and only keep stuff on that I am likely to want to listen to before I next plan to plug into the computer. Not that plugging into the computer is the big issue it once was.

Now that I have a decent set of headphones to go with it, I have to say that I am very pleased with my little eyeProd. There are a few issues that I would like to talk to the software interface designers about, but these are things that I have got used to. It sure beats having to lug a laptop everywhere, which was how I listened to music before.

I give the iPod Nano 3rd Generation experience, including iTunes Store but excluding the Apple headphones a Smiffy Score of 8.5 out of 10.

And no, I’m not too old for one of these things.

Foxy Add-Ons: Tab Mix Plus

I have looked at a lot of Firefox extensions over time. One of the most useful that I have come across of late is Tab Mix Plus.  This extension gives the user vastly more control over tabbed browsing than un-extended Firefox. 

Whilst I have had the standard Firefox settings configured to open all new windows in new tabs, my online banking (ANZ) has persistently opened its logged-in session in a separate window. Due to some problems that I have been experiencing with the Fluxbox window manager and xinerama, the banking window inevitably opened very small on my smallest screen. Not any more! Now when I log into banking, I get a nice, new tab in my existing window. That single feature makes Tab Mix Plus worthwhile for me.

The Solstice Clock – Part 1

Preamble

My daily routines tend to be vague and imprecise, and are subject to the fragilities of my health; I am no slave to the clock. (The notable exception to this is when it is time to make the dinner; you can set your watch by it.) Over the last year, however, my fascination with the measurement of time, and the history of the same, has been on the increase.

For several years, I have been disenchanted by some of the artificial, arbitrary and often (to me) pointless aspects of modern, 'Western' timekeeping. Take daylight saving, for instance; I have read the various arguments for it, but have yet to see one which does not have a pertinent counter-argument or that justifies upsetting timekeeping around the globe. The changes in various countries and states are not even synchronous. (The USA and Europe are a couple of weeks apart in their change-over dates. In Australia, the state of Queensland does not even have daylight saving – and good for them, I say.)

A more recent annoyance that has come with my entering the age-group that might be termed 'grumpy old man' is the Gregorian calendar. Follow the link if you want to know more about this – I am not going to repeat at length what is recorded in innumerable places. I concede that the Julian calendar had a year that was a little too long and was getting further and further out of whack with the Tropical year. However, what really makes me grit my teeth is the totally arbitrary (in terms of the Tropical year) start point. The Vernal Equinox (Autumnal Equinox for those of us living in the Southern Hemisphere) tends to be the reference point for the Tropical year, but I can see that this would not fit in with the whole 'Rebirth of the Sun' thing, which would make the Winter (or Summer in the Southern Hemisphere) Solstice the reference point. But no, a point some 10 to 11 days after the Winter Solstice is what we've got to put up with.

Calendars

Let's turn our attention now to calendars in the physical sense. Without any further ranting about the artificial and arbitrary length of the weeks and months of the Gregorian calendar, what does this calendar mean to most of us? Generally, a set of 12 printed pages, broken down into grids so that we can see a correspondence between days of the week and days of the month. This grid may have pre-printed information telling us useful-to-know things like "Moon waxing gibbous" or "Sow mangold-wurzels now!". There may even be space to write our own information like "Wedding anniversary next week", "Wedding anniversary tomorrow", "Wedding anniversary", "Doh, missed it again! In dog house."

If we look at a clock, it tells us what time it is. If we look at the calendar described above, does it tell us what date it is? The answer is no. Despite the fact that calendars that tell you what the date is have been around for quite some time (e.g.: Stonehenge), the ubiquitous paper (or other medium) calendar gives us absolutely no idea of what date it is.

The Importance of Calendars: Food

What events of real importance are indicated by calendars? Irrespective of the calendar system used, the most important thing that I can think of that might be indicated by a calendar is the timings involved in agriculture – the sowing and harvesting of crops, the gestation of livestock, etc. Without these, we have no food. (I suspect that the world population is a little too large for a total reversion to a hunter-gatherer system.)

So, calendars can be of importance, in more widespread terms than the occasional murder due to forgetting one anniversary too many. Our graphic calendars, diaries and almanacs still do not help us know where we are in the annual cycle. There are many seasonal indicators that can tell the farmer that it is time to start ploughing (like the snow may have melted so that there is actually ground visible to plough) and – of course – there are always the stars for those who know how to read them and don't live somewhere that has a permanent overcast. The moon is always a good time-reckoner and many calendars are based on it – you still need a clear sky to watch it, though, and some way of keeping track of how many moons have passed since event X.

A Clock is a Fast Calendar

As I mentioned earlier, a clock can tell us what time it is. If we take a mechanical clock and add a few more gears (a divide-by-24 from the hour-hand shaft), we can make it count days. If months were of a regular length, we could reduce further and have a months dial. Months of irregular length may also be dealt with, even for leap years – far more complex mechanics would be involved, though.

If we were not concerned about displaying hours and minutes (and possibly seconds) on our mechanical clock, we could turn it into a calendar simply by making it tick slower – much slower.

The Slow Tick

If we take the Tropical year as being 365.24219 to 8 significant figures, we can calculate:

ns = 365.24219 x 24 x 60 x 60 = 31556925 = seconds in a tropical year, to 8 significant figures.

If we decided that 12 hours on our clock was to represent a tropical year, we can divide the above number of seconds by the number of 1-second ticks of the clock (assuming that it has a 1-second tick) required to rotate the hands by 12 hours:

nt = number of ticks required to rotate hands by 12 hours = 60 x 60 x 12 = 43200

So, to work out the length of the tick that we would need to rotate the hands once in a Tropical year:

t = ns/nt = 730.48438 seconds, to 8 significant figures.

That means that our Slow Tick would occur roughly every 12 minutes, 10.5 seconds.

A Tricky Escapement

I will leave it to some clever-clogs to work out how to make a mechanical clock escapement that only ticks every 12-and-a-bit minutes (no down-gearing allowed!)

As I am not particularly interested in modifying a traditional, purely mechanical clock for these purposes, I will look at how an electro-mechanical clock may be used instead.

Quartz clock movements may be obtained cheaply from hobby suppliers. However, entire clocks can be obtained even more cheaply from 'cheap' shops. With the latter, you get a face and a case thrown into the bargain, so have little to do in the way of mechanical construction.

My practical research for this article has so far extended to dissecting a quartz clock obtained from a local supermarket for $12 AUD. Once the movement is removed, it looks very much like every other cheap quartz movement that I have seen over the last few years. The drive, contrary to what I suspected, does not consist of a solenoid that is simply pulsed every second driving some kind of pawl-and-ratchet mechanism, but of a cylindrical magnet between the poles of a solenoid, which would require a reversing field every second. (If a simple pulse train of fixed polarity were applied, the magnet would move possibly once, then just twitch slightly every time a pulse came along.)
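Ahead of the proper treatment in Part 2, a rough sketch of how an AVR might provide that reversing drive (pin assignments and drive arrangement entirely assumed):

#include <avr/io.h>
#include <stdint.h>

/* Assumed wiring: the movement's coil sits between PD0 and PD1 (both
   configured as outputs), so driving one pin high and the other low
   reverses the field through the coil on each tick. */
void slow_tick(void)  /* to be called once every 730.48 seconds */
{
    static uint8_t polarity;

    polarity ^= 1;
    if (polarity) {
        PORTD |= _BV(PD0);
        PORTD &= (uint8_t)~_BV(PD1);
    } else {
        PORTD |= _BV(PD1);
        PORTD &= (uint8_t)~_BV(PD0);
    }

    /* A real driver would pulse the coil briefly and then release both
       pins, rather than holding it energised between ticks. */
}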

Part 2 details some thoughts on the pulse generator which will drive the Solstice Clock and how it can become more than just a Solstice Clock.

VMware – No Developer Should Be Without It

Many years ago, a colleague told me how he no longer needed to use Windows on his laptop, because he had something called VMware.

The main attraction for me of VMware is that it allows me to test things on Windows without having to re-boot my computer or even fire up a separate machine.

At one point, I even acquired an evaluation license for VMware, but getting it all working just looked too hard.

In a recent correspondence with one of my learned colleagues at GAWDS, the – in my mind – unlikely combination of the words ‘VMware’, ‘installation’ and ‘easy’ came together. I thought then that maybe it was time to have another, and closer, look at this product.

After looking at various wikis, I was under the impression that I might be able to run my dual-boot Windows XP partition through VMware, under Linux. That is, I boot Linux, then run Windows in a virtual machine. The thought of being able to do this without having to reboot into the other operating system was very attractive – so much so that I purchased and downloaded VMware Workstation.

VMware Workstation is available both as an rpm package for those using RedHat and SuSE-derived distributions, and as a tarball. I had a quick look to see if there was a Gentoo ebuild available, but it appeared that all the recent editions were masked so I elected to use the ‘official’ tarball, as I might at least be able to get some support if things went wrong.

The installer in the VMware Workstation tarball assumes that somewhere on the system there are directories rc0.d through rc6.d. On Gentoo, this is not the case. I ended up creating these directories, just so I could get the installer to do its work. Installation went smoothly and none of the questions asked raised any issues or caused me to seek help. A reboot was required before things would work correctly, and I had to invoke the programme thus:

VMWARE_USE_SHIPPED_GTK="yes" vmware

I followed the appropriate instructions to set up my Windows partition to run as a virtual machine, started the virtual machine, selected the ‘Windows’ option on the Grub menu and had the whole machine freeze on me. This was starting to seem too much like hard work. The instructions advised that the preferred method was to install the guest system in a virtual disc rather than running from the physical disc. I decided that I wasn’t going to drive myself nuts trying to get the physical-disc arrangement working, so dug out a Windows XP Home Edition DVD and key which I wasn’t using and installed from scratch.

I have to confess that I was expecting all sorts of horrible things to happen, but the installation and application of all the Windows updates since XP Service Pack 1 went without a hitch. Everything just worked.

The ultimate test was to see whether I could install iTunes, restore the iTunes directory from my backup (not a backup made through iTunes, I hasten to add) and get my iPod to synchronise. (As I am not doing much testing with Internet Explorer at the moment, iTunes is currently the most frequent reason for having to reboot into Windows.) My backup had been made by rsync’ing the iTunes directory to my file server, under Cygwin. Once again, I was expecting something horrible to happen when I started the DRM-rich iTunes. Once again, nothing horrible did happen – I just got asked for my password and had to authorize the ‘new’ computer with the iTunes store.

Plugging in my iPod brought up a message saying that VMware was having to disconnect it from the regular driver (USB storage, through udev) to enable it to work with iTunes – I panicked a bit when I saw the message, but was quickly relieved when I saw my iPod appear in the iTunes window. I even purchased the album “She’s So Unusual” by Cyndi Lauper and installed it on my iPod.

The preloaded Windows partition is now – as far as I am concerned – totally redundant. In fact, I have now deleted it, formatted it as ext3 and now have the directory containing my virtual machine mounted on it. Important: before ‘blowing away’ the original Windows partition, I first went into iTunes and de-authorised the machine. If I had not, my iTunes music would be authorised for only four computers – the other one being lost. I don’t know if there is any way in which one can ask Apple to de-authorise a machine that no longer exists, but think it better to take this simple step rather than having the problem in the first place.

Whilst I think that VMware is an excellent piece of software – far easier to use than anticipated – and that every developer should have a copy, I would make one caveat: running virtual machines needs powerful hardware, unless one likes having a machine that runs slightly slower than continental drift. The laptop on which I am running VMware has a dual-core 1.85GHz processor and 1.5Gb RAM. (0.5Gb allocated to VMware, along with 1 CPU core.) I would not want to try running with fewer resources and am considering upgrading to 3Gb RAM so that the virtual machine can run with a full 1Gb, the rest being left for the GNU/Linux system.

 

Am I Too Slow?

Preamble

From the introduction to the Web Content Accessibility Guidelines, 1.0:

For those unfamiliar with accessibility issues pertaining to Web page design, consider that many users may be operating in contexts very different from your own … They may have … or a slow Internet connection.

Despite the availability of broadband Internet connections becoming more widespread, slow connections continue to be an issue with the growth of web access via mobile devices. Whilst we have no control over how fast a user's connection is, there are things that we can do to make life easier – and faster – for those with slow connections. Connection speeds, however, are not the only speed-limiting factors in the delivery of web content. This article describes some of the factors that can impact how quickly web content may be delivered and rendered, with suggestions as to how we can make improvements.

Time Trials

Whilst dusting off an old 28,800 modem and using it to connect to the Internet is one way to get a feel for the overall performance of a web site, it is not exactly practical – no more so than connecting via a cellphone (also very expensive).

The tool I tend to use for checking speed is an online service from WebSiteOptimization.com – the free Website Performance Tool and Web Page Speed Analysis. For those using the Web Developer Toolbar for Firefox, there is a shortcut to this service via Tools->View Speed Report.

Size Counts

Before I go into the more complex issues of dynamic sites and technical stuff about web servers, let's have a look at the issues that can affect simple, static web sites. (All the issues here apply to dynamic sites as well.)

Images and Other Media

Remember the days when it took several minutes for a large image to render on the screen? Just spare a thought for those who still have connections that slow.

What is the problem? Too many images? Too large images? Too many, too large images? (When I say images here, this applies equally to any other media that are loaded by default with the page.) The answer is really one of overall size. Look at the sizes of all the images that load with a page, add them together, then add the size of the page itself plus any linked stylesheets or scripts. The greater that total size, the longer the page will take to load and render.

Questions to ask yourself:

  • Do I really need all those images – are they essential for making the page look good, or do they just add clutter and distract the eye from the subject matter?
  • Do my images need to be that big? (Thumbnails may be linked to larger images for those who want to see all the gory detail and don't mind waiting.)
  • With JPEG images, how much can I increase the compression without noticeable loss of quality? (The answer is sometimes "quite a lot".)
  • Photographs: are these photographs well-composed, or could they benefit from cropping (and thus reducing the size)?

What's All That In Your Document <head></head>?

I would have to put up my hand to having created pages where the document <head></head> is larger than the <body></body> – generally due to the inclusion of large amounts of Dublin Core metadata (see my previous article, 'Metadata, Meta Tags, Meta What?'). There are lots of things that should be up there in the <head></head>, but there are some things that may be better placed elsewhere:

Styling
Unless your site has only one page, forget about having <style></style> in your <head></head>; use an external stylesheet and provide a link (see the example after this list). If your user agent (web browser) and the web server are both behaving properly, your external stylesheets should be requested from the server once and then cached somewhere on your local computer. If, however, you are duplicating that information in the <style></style> of every page, you are pulling that data down every time a page is loaded. Don't forget that this is in addition to the half-a-megabyte of banner image that you created at the wrong resolution and then scaled using CSS.
Scripts
Whilst there are some scripts that may only be required on one page, any that need to run on multiple pages should be stored externally. Once again, caching will call the script file from the server once rather than on every page load.
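The link in question is just the usual element (the path here is illustrative):

<link rel="stylesheet" type="text/css" href="/css/main.css" media="screen">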

I Don't GET It

When you ask your user agent to fetch you a web page to read (or look at the pictures), whilst you are saying "bring me that page with all the pictures on it", the user agent has to do far more work than you might expect. The web is all based on HTTP transactions. (And you thought that the http:// at the beginning of URIs was just there to be annoying.) Let's consider a hypothetical page with 2 CSSs, 4 images and a Google Analytics tracker. When you say "bring me that page with those 4 nice images that my friend told me about", the user agent has to go through all this:

  1. Contact the server and issue a GET request for the HTML page itself.
  2. Have a look at the HTML page when received and make a list of all the other GETs that it needs to do.
  3. GET a CSS.
  4. GET another CSS.
  5. GET a background image specified in one of the CSSs.
  6. GET image #1
  7. GET image #2
  8. GET image #3
  9. GET image #4
  10. GET urchintracker.js from Google, and wait five minutes for it to turn up
  11. Render the page.

In case you weren't counting, that was 9 HTTP transactions to bring you that one page. Although all the to-ing and fro-ing of an HTTP transaction doesn't (usually) take that long, each transaction does take a finite amount of time. If you can put all your CSS in one file (assuming it's all for the same media type or all media types), do so – that's one less HTTP transaction to slow things down.
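For reference, a single one of those transactions looks something like this, trimmed down to the interesting lines (names illustrative):

GET /css/main.css HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/css
Content-Length: 4096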

The facetious comment about Google Analytics comes from bitter experience – I have, on many occasions, had to wait for pages to finish loading, just for some piece of JavaScript that tracks sea urchins. Not being unduly interested in sea urchins (or other people's ability to track site visitors), the Firefox Adblock extension saves me that HTTP transaction every time.

Update:

I am advised by a reader that a much faster Google Analytics script is now available – according to Google. I will believe this when I see it.

No, Not Here, Over There

Redirects can be really, really handy when writing web applications; just don't over-do them, as every one means an extra HTTP request.

Server Tips and Tricks

Squish!

As we all know, the Internet is a set of tubes. To get things to move through tubes faster, we can squash them up nice and small. I just saved this article, as far as I have written, to a file and looked at its size, which was 19399 bytes. I then squashed it up nice and small using a tool called gzip, after which it was 5971 bytes – that's less than a third of the original size. Text files – HTML, CSS, JavaScript – squash down really well. Image files are another case since many image file formats allow for compression. If you have compressed a JPEG image as much as you can, trying to squash it down yet further using gzip can – in some circumstances – make it bigger. Strange, but true.

But how can you squash your files? This is something that can be set up either in your web server configuration or, if you are running a dynamic site, can be done in the web application itself. Not every user agent can handle squashed files so either the web server or our software has to look at a line of the HTTP request that says something like:

Accept-Encoding: compress, gzip

This means that we can squash our files using either the compress or gzip formats. Alternatively:

Accept-Encoding: compress;q=0.5, gzip;q=1.0

This means that both compress and gzip are acceptable, but gzip is preferred. (Personally, I prefer a super-squasher called bzip2, but I haven't heard of it being supported by user agents.)
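If the server obliges, the response comes back flagged accordingly, along these lines:

Content-Encoding: gzip
Vary: Accept-Encoding

(The Vary header warns any caches along the way that compressed and uncompressed versions of the same resource exist.)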

In every silver lining there is, however, a cloud; whilst our squashed up files may go through the tubes a lot quicker, there is computer overhead at both ends as the web server (or application software) needs to do the squashing before it sends the files off and the user agent has to un-squash it before it can be rendered. (Visions of trying to unpack an over-stuffed rucksack spring to mind.)

Tune Up

If you are running your own web server, you did read all the documentation didn't you? (Ha!) Assuming that it was so long ago that you have forgotten, try Googling for: Apache tuning spare-servers. (If you don't use Apache, substitute the name of your own web server software and strike out the spare-servers bit.) Getting your server configuration right can make a big difference in how quickly you can service incoming requests, especially when things get busy.
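By way of illustration only – sensible numbers depend entirely on your traffic and your RAM – the directives in question for Apache's prefork MPM look like this (these happen to be close to the stock defaults):

StartServers       5
MinSpareServers    5
MaxSpareServers   10
MaxClients       150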

Hardware (Technical Stuff Alert)

If you are not only running your own web server, but are doing so on your own hardware, put as much RAM in it as it will take or you can afford. Use RAID, not just for security, but for performance. Use fast discs with the fastest interfaces. Use multi-core CPUs. Build your LAMP components (assuming a LAMP environment) specifically for your processor architecture and with the appropriate optimisation flags set.

Even if you are running a virtual private server on someone else's hardware, you can generally pay a little extra to increase your RAM. Do it. The less the operating system has to swap, the sooner your web content gets to your customers, or your customers' customers.

Stop running SETI@Home on your web servers – it really doesn't help matters.

Mind Your Language (More Technical Stuff)

Web applications can slow things down too! Here are a few bullet-point tips for those who write and use web applications:

  • If you are new to programming, don't be satisfied that your programme works – make sure that it works efficiently. Take the time to really learn your language of choice – and that includes the SQL and features of whatever RDBMS you are using. PHP is so easy to code that it is easy to code badly. A bit like cars with automatic transmission – anyone can drive one through their neighbour's front window. If you do not have a programming background, try learning a "real" language like C – the discipline should do wonders for your PHP coding skills. I would recommend 'C All-in-One Desk Reference For Dummies' by Dan Gookin (the guy who wrote the original 'DOS for Dummies') as an ideal beginners' text. If you are able to learn from Kernighan & Ritchie, you must already be a programmer and need no further telling.
  • Don't run PHP as CGI – use the appropriate Apache module.
  • If you use Perl and your site is getting big/busy, start converting your code to run with mod_perl before everything starts to slow down. (For an example of a large site running on mod_perl: Slashdot.)
  • Use sub-selects in your SQL – try to keep down round-tripping (do a query, do something with it, do another query based on that) in PHP/Perl – it's inefficient. The fewer calls you make to the database – and the more you can get the database to do for every call (think stored procedures) – the faster things will run.
  • Consider having tables of cached content such as metadata, navigation structures, etc., that are updated when pages are changed. These often involve complex queries which can impact performance on busy/large sites, if run every time a page is requested. Caching the output of complex queries means that those queries are run only once when the page is created – simpler, faster queries are then used to deliver the content.
  • For content that is not changed often, consider caching it as static pages as these can be served much quicker than having to run a programme every time the page is requested. Reverse proxies can be useful here, too.
  • If you are going to be searching on a database field, make sure that it is indexed. MySQL's fulltext indexing is very powerful, and very fast.
  • When designing your database, make it so that fields that link to other tables are integers. You can't get any faster than integer comparisons. (Don't forget to index those fields too.)
  • If you really want blinding performance and can't just throw more hardware at it, consider moving to a compiled language like C.

Conclusion

Speed is an accessibility issue, and the things that slow down the delivery of web content are cumulative in effect. Every little thing that you can do to speed your content to your audience is worth it – and may mean the difference between gaining a sale (or whatever) and having your prospective client get fed up with waiting and go elsewhere.

This article was written for the February 2008 edition of the newsletter of the Guild of Accessible Web Designers (GAWDS).

Matthew Smith asserts the right to be identified as the original author of this work.