Monthly Archives: February 2008

Rice Worms!

rice noodles

Chinese New Year was upon us, so I decided to do something with noodles. The only gluten-free noodles that I could find in the pantry were some sort of rice vermicelli, which I was later told was pretty ghastly anyway. Whilst I had made reasonably successful gluten-free pastas before with maize (corn) and amaranth (not in the same batch), rice was a new venture for me.

2 cups (200ml cups, that is) of rice flour, a teaspoon of guar gum and two eggs yielded a not-too-sticky dough that was ideal for the pasta extruder. Whilst my extruder lacks a die for any form of noodles or spaghetti, it does have one for very small maccharoni (probably maccharonininini or something) which I used to produce my ‘rice worms’.

There was little problem with the extruded and cut pasta sticking to itself or the plate so I was able to dry it on a tea towel without having to be careful to keep the pieces separate.

As this pasta is a little more physically delicate than the wheat variety, I added it to my chicken chow mein after the vigorous stir-frying had been done and actually let it cook by steaming, with the lid on the wok.

The result was very nice indeed – both of us enjoyed it immensely.

It does take a fair bit of effort to make pasta like this – at least it does when I do it – but the yield of the batch was a large one so there was enough left over for Jane’s lunch the next day, which justifies the time spent. Larger batches would probably be even more economical on time, allowing me to prepare enough for a few meals, most going in the freezer.

Am I Too Slow?


From the introduction to the Web Content Accessibility Guidelines, 1.0:

For those unfamiliar with accessibility issues pertaining to Web page design, consider that many users may be operating in contexts very different from your own … They may have … or a slow Internet connection.

Despite broadband Internet connections becoming more widely available, slow connections continue to be an issue with the growth of web access via mobile devices. Whilst we have no control over how fast a user's connection is, there are things that we can do to make life easier – and faster – for those with slow connections. Connection speeds, however, are not the only speed-limiting factors in the delivery of web content. This article describes some of the factors that affect how quickly web content is delivered and rendered, with suggestions as to how we can make improvements.

Time Trials

Whilst dusting off an old 28,800 bps modem and using it to connect to the Internet is one way to get a feel for the overall performance of a web site, it is not exactly practical – no more so than connecting via a cellphone (also very expensive).

The tool I tend to use for checking speed is an online service – the free Website Performance Tool and Web Page Speed Analysis. For those using the Web Developer Toolbar for Firefox, there is a shortcut to this service via Tools->View Speed Report.

Size Counts

Before I go into the more complex issues of dynamic sites and technical stuff about web servers, let's have a look at the issues that can affect simple, static web sites. (All the issues here apply to dynamic sites as well.)

Images and Other Media

Remember the days when it took several minutes for a large image to render on the screen? Just spare a thought for those who still have connections that slow.

What is the problem? Too many images? Too large images? Too many, too large images? (When I say images here, this applies equally to any other media that are loaded by default with the page.) The answer is really one of overall size. Look at the sizes of all the images that load with a page, add them together, then add the size of the page itself plus any linked stylesheets or scripts. The greater that total size, the longer the page will take to load and render.
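To make the point concrete, here is a minimal sketch of totting up a page's overall weight – the file names and sizes are made up for illustration:

```python
import os
import tempfile

def page_weight(paths):
    """Total on-disk size, in bytes, of a page plus all its assets."""
    return sum(os.path.getsize(p) for p in paths)

# Hypothetical stand-ins for a page, its stylesheet and a banner image.
tmp = tempfile.mkdtemp()
files = []
for name, size in [("index.html", 12000),
                   ("style.css", 4000),
                   ("banner.jpg", 90000)]:
    path = os.path.join(tmp, name)
    with open(path, "wb") as f:
        f.write(b"x" * size)   # dummy content of the stated size
    files.append(path)

total = page_weight(files)
print(total)  # 106000 -- the banner dominates, as banners usually do
```

The arithmetic is trivial, but doing it once for a real page is often a sobering exercise.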

Questions to ask yourself:

  • Do I really need all those images – are they essential for making the page look good, or do they just add clutter and distract the eye from the subject matter?
  • Do my images need to be that big? (Thumbnails may be linked to larger images for those who want to see all the gory detail and don't mind waiting.)
  • With JPEG images, how much can I increase the compression without noticeable loss of quality? (The answer is sometimes "quite a lot".)
  • Are my photographs well-composed, or could they benefit from cropping (thus reducing the file size)?

What's All That In Your Document <head></head>?

I would have to put up my hand to having created pages where the document <head></head> is larger than the <body></body> – generally due to the inclusion of large amounts of Dublin Core metadata (see my previous article, 'Metadata, Meta Tags, Meta What?'). There are lots of things that should be up there in the <head></head>, but there are some things that may be better placed elsewhere:

Unless your site has only one page, forget about having <style></style> in your <head></head>; use an external CSS and provide a link. If your user agent (web browser) and the web server are both behaving properly, your external stylesheets should be requested from the server once and then cached somewhere on your local computer. If, however, you are duplicating that information in the <style></style> of every page, you are pulling that data down every time a page is loaded. Don't forget that this is in addition to the half-a-megabyte of banner image that you created at the wrong resolution and then scaled using CSS.
Whilst there are some scripts that may only be required on one page, any that need to run on multiple pages should be stored externally. Once again, caching will call the script file from the server once rather than on every page load.
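As a sketch – the file names are made up – the <head></head> of every page then carries only two short pointers; the browser fetches each file once and serves it from cache thereafter:

```html
<head>
  <title>My Page</title>
  <!-- Fetched once, cached, and reused on every other page of the site -->
  <link rel="stylesheet" type="text/css" href="/css/site.css" media="screen">
  <script type="text/javascript" src="/js/site.js"></script>
</head>
```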

I Don't GET It

When you ask your user agent to fetch you a web page to read (or look at the pictures), whilst you are saying "bring me that page with all the pictures on it", the user agent has to do far more work than you might expect. The web is all based on HTTP transactions. (And you thought that the http:// at the beginning of URIs was just there to be annoying.) Let's consider a hypothetical page with 2 CSSs, 4 images and a Google Analytics tracker. When you say "bring me that page with those 4 nice images that my friend told me about", the user agent has to go through all this:

  1. Contact the server and issue a GET request for the HTML page itself.
  2. Have a look at the HTML page when received and make a list of all the other GETs that it needs to do.
  3. GET a CSS.
  4. GET another CSS.
  5. GET a background image specified in one of the CSSs.
  6. GET image #1
  7. GET image #2
  8. GET image #3
  9. GET image #4
  10. GET urchin.js from Google, and wait five minutes for it to turn up
  11. Render the page.

In case you weren't counting, that was 9 HTTP transactions to bring you that one page. Although all the to-ing and fro-ing of an HTTP transaction doesn't (usually) take that long, each transaction does take a finite amount of time. If you can put all your CSS in one file (assuming it's all for the same media type or all media types), do so – that's one less HTTP transaction to slow things down.
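The steps above can be sketched in code. Here is a rough Python illustration – the page markup is invented, and a real user agent would also fetch background images found in the CSS, which this toy parser does not read:

```python
from html.parser import HTMLParser

class AssetCounter(HTMLParser):
    """Counts the extra GET requests a user agent must issue for a page."""
    def __init__(self):
        super().__init__()
        self.requests = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.requests.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.requests.append(attrs["href"])
        elif tag == "script" and "src" in attrs:
            self.requests.append(attrs["src"])

# An invented page: two stylesheets, four images, one tracking script.
page = """
<html><head>
<link rel="stylesheet" href="a.css"><link rel="stylesheet" href="b.css">
<script src="urchin.js"></script>
</head><body>
<img src="1.jpg"><img src="2.jpg"><img src="3.jpg"><img src="4.jpg">
</body></html>
"""

counter = AssetCounter()
counter.feed(page)
# One GET for the page itself, plus one per asset found. (Background
# images specified in the CSS would add more, as in the list above.)
total_gets = 1 + len(counter.requests)
print(total_gets)  # 8
```

Every entry in that tally is a round trip to a server, and they add up.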

The facetious comment about Google Analytics comes from bitter experience – I have, on many occasions, had to wait for pages to finish loading, just for some piece of JavaScript that tracks sea urchins. Not being unduly interested in sea urchins (or other people's ability to track site visitors), the Firefox Adblock extension saves me that HTTP transaction every time.


I am advised by a reader that a much faster Google Analytics script is now available – according to Google. I will believe this when I see it.

No, Not Here, Over There

Redirects can be really, really handy when writing web applications; just don't over-do them, as every one means an extra HTTP request.

Server Tips and Tricks


As we all know, the Internet is a set of tubes. To get things to move through tubes faster, we can squash them up nice and small. I just saved this article, as far as I have written, to a file and looked at its size, which was 19399 bytes. I then squashed it up nice and small using a tool called gzip, after which it was 5971 bytes – that's less than a third of the original size. Text files – HTML, CSS, JavaScript – squash down really well. Image files are another case since many image file formats allow for compression. If you have compressed a JPEG image as much as you can, trying to squash it down yet further using gzip can – in some circumstances – make it bigger. Strange, but true.

But how can you squash your files? This is something that can be set up either in your web server configuration or, if you are running a dynamic site, can be done in the web application itself. Not every user agent can handle squashed files, so either the web server or our software has to look at a line of the HTTP request that says something like:

Accept-Encoding: compress, gzip

This means that we can squash our files using either the compress or gzip format. Alternatively, the header might say:

Accept-Encoding: compress;q=0.5, gzip;q=1.0

This means that both compress and gzip are OK, but I prefer gzip. (Personally, I prefer a super-squasher called bzip2, but I haven't heard of it being supported by user agents.)
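Server-side, choosing an encoding from such a header might look like this minimal Python sketch – a real parser must also handle *, identity and malformed input (see RFC 2616, section 14.3):

```python
def preferred_encoding(header, supported=("gzip", "compress")):
    """Pick the client's most-preferred encoding that we can produce.

    A minimal sketch: parses 'name;q=value' items from an
    Accept-Encoding header and returns the best supported name,
    or None if nothing acceptable is on offer.
    """
    choices = []
    for part in header.split(","):
        bits = part.strip().split(";")
        name = bits[0].strip()
        q = 1.0                     # absent q-value means q=1.0
        for param in bits[1:]:
            key, _, value = param.partition("=")
            if key.strip() == "q":
                q = float(value)
        if name in supported and q > 0:
            choices.append((q, name))
    return max(choices)[1] if choices else None

print(preferred_encoding("compress;q=0.5, gzip;q=1.0"))  # gzip
print(preferred_encoding("bzip2"))                       # None -- sadly
```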

In every silver lining there is, however, a cloud; whilst our squashed up files may go through the tubes a lot quicker, there is computer overhead at both ends as the web server (or application software) needs to do the squashing before it sends the files off and the user agent has to un-squash it before it can be rendered. (Visions of trying to unpack an over-stuffed rucksack spring to mind.)
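If you would rather let the web server do the squashing (and the Accept-Encoding checking) for you, Apache 2.x ships with mod_deflate. A minimal, illustrative configuration fragment – the MIME type list is just the usual text suspects, not exhaustive:

```
# httpd.conf -- requires mod_deflate to be loaded
AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript
```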

Tune Up

If you are running your own web server, you did read all the documentation didn't you? (Ha!) Assuming that it was so long ago that you have forgotten, try Googling for: Apache tuning spare-servers. (If you don't use Apache, substitute the name of your own web server software and strike out the spare-servers bit.) Getting your server configuration right can make a big difference in how quickly you can service incoming requests, especially when things get busy.
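For reference, the prefork MPM directives in question look like this – the values are purely illustrative, not recommendations; tune them against your own traffic and RAM:

```
# Apache prefork MPM -- example values only
StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients          150
MaxRequestsPerChild 4000
```

Too few spare servers and visitors wait for processes to be forked; too many and you waste the RAM you were told to buy in the next section.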

Hardware (Technical Stuff Alert)

If you are not only running your own web server, but are doing so on your own hardware, put as much RAM in it as it will take or you can afford. Use RAID, not just for resilience, but for performance. Use fast discs with the fastest interfaces. Use multi-core CPUs. Build your LAMP components (assuming a LAMP environment) specifically for your processor architecture and with the appropriate optimisation flags set.

Even if you are running a virtual private server on someone else's hardware, you can generally pay a little extra to increase your RAM. Do it. The less the operating system has to swap, the sooner your web content gets to your customers, or your customers' customers.

Stop running SETI@Home on your web servers – it really doesn't help matters.

Mind Your Language (More Technical Stuff)

Web applications can slow things down too! Here are a few bullet-point tips for those who write and use web applications:

  • If you are new to programming, don't be satisfied that your programme works – make sure that it works efficiently. Take the time to really learn your language of choice – and that includes the SQL and features of whatever RDBMS you are using. PHP is so easy to code that it is easy to code badly. A bit like cars with automatic transmission – anyone can drive one through their neighbour's front window. If you do not have a programming background, try learning a "real" language like C – the discipline should do wonders for your PHP coding skills. I would recommend 'C All-in-One Desk Reference For Dummies' by Dan Gookin (the guy who wrote the original 'DOS for Dummies') as an ideal beginner's text. If you are able to learn from Kernighan & Ritchie, you must already be a programmer and need no further telling.
  • Don't run PHP as CGI – use the appropriate Apache module.
  • If you use Perl and your site is getting big/busy, start converting your code to run with mod_perl before everything starts to slow down. (For an example of a large site running on mod_perl: Slashdot.)
  • Use sub-selects in your SQL – try to keep down recursion (do a query, do something with it, do another query based on that) in PHP/Perl – it's inefficient. The fewer calls you make to the database – and the more you can get the database to do for every call (think stored procedures) – the faster things will run.
  • Consider having tables of cached content such as metadata, navigation structures, etc., that are updated when pages are changed. These often involve complex queries which can impact performance on busy/large sites, if run every time a page is requested. Caching the output of complex queries means that those queries are run only once when the page is created – simpler, faster queries are then used to deliver the content.
  • For content that is not changed often, consider caching it as static pages as these can be served much quicker than having to run a programme every time the page is requested. Reverse proxies can be useful here, too.
  • If you are going to be searching on a database field, make sure that it is indexed. MySQL's fulltext indexing is very powerful, and very fast.
  • When designing your database, make it so that fields that link to other tables are integers. You can't get any faster than integer comparisons. (Don't forget to index those fields too.)
  • If you really want blinding performance and can't just throw more hardware at it, consider moving to a compiled language like C.
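The database points above – fewer calls, letting the database do the work, indexed integer keys – can be sketched with Python's built-in sqlite3 module. The table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE articles (
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES authors(id),  -- integer key: fast to compare
    title TEXT)""")
cur.execute("CREATE INDEX idx_articles_author ON articles(author_id)")
cur.executemany("INSERT INTO authors VALUES (?, ?)",
                [(1, "Jane"), (2, "Matthew")])
cur.executemany("INSERT INTO articles VALUES (?, ?, ?)",
                [(1, 1, "Rice Worms!"), (2, 2, "Am I Too Slow?")])

# The slow, recursive pattern: one query, then one more per row (N+1 calls).
authors = cur.execute("SELECT id, name FROM authors").fetchall()
slow = [(name, cur.execute("SELECT title FROM articles WHERE author_id = ?",
                           (aid,)).fetchall())
        for aid, name in authors]

# The better pattern: one call, and the database does the work.
fast = cur.execute("""SELECT a.name, ar.title
                      FROM authors a JOIN articles ar ON ar.author_id = a.id
                      ORDER BY a.id""").fetchall()
print(fast)
```

On two rows the difference is invisible; on a busy site with thousands of rows, the N+1 pattern is what makes the phone ring.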


Speed is an accessibility issue, and the things that slow down the delivery of web content are cumulative in effect. Every little thing that you can do to get your content to your audience is worth it – and may mean the difference between gaining a sale (or whatever) and having your prospective client get fed up with waiting and go elsewhere.

This article was written for the February 2008 edition of the newsletter of the Guild of Accessible Web Designers (GAWDS).

Matthew Smith asserts the right to be identified as the original author of this work.