Australian Lefty on Politics, Governance, Science and Info Management

Implications of off-line parliamentary webserver

Posted by Dave Bath on 2009-01-11

The implications of the federal parliamentary website being out of action (see my earlier post and Club Troppo for more info) are serious, whatever the cause, because not only was it offline for 12 hours or more, it died mid-transaction.

The cleaner tripped over the power cord theory

Cleaners tripping over power cords, mice chewing network cables, and other accidents are possible causes, but if that is what happened, then the DRP (disaster recovery plan) and information risk management strategies are useless, and IT managers should be taken out and shot.

Clustering, failover servers, uninterruptible (or at least "graceful shutdown") power supplies and similar technologies are now stock-standard even for medium-sized companies.
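The failover idea is simple enough to sketch in a few lines of shell. Everything here is illustrative (the health-check command, the threshold, the promotion step are all hypothetical); real sites use keepalived, Pacemaker, or load-balancer health checks rather than a hand-rolled loop.

```shell
#!/bin/sh
# Toy failover watchdog: count consecutive failed health checks and
# promote a standby once a threshold is passed. All names illustrative.
MAX_FAILURES=3

watchdog() {
    # $1 = health-check command (e.g. a curl against the primary)
    failures=0
    for attempt in 1 2 3; do
        if $1; then
            failures=0          # primary answered: reset the count
        else
            failures=$((failures + 1))
        fi
    done
    if [ "$failures" -ge "$MAX_FAILURES" ]; then
        echo "promote standby"  # placeholder for moving a virtual IP
    else
        echo "primary healthy"
    fi
}

watchdog true    # a primary that always answers
watchdog false   # a primary that never answers
```

In production the check would be something like `curl --fail` against the primary, and "promote standby" would move a virtual IP or flip a DNS record rather than print a message.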

This is unlikely to be the cause unless there is systemic mismanagement (but then again, that is all too common).  If this is the cause, there are no valid excuses apart from a meteorite or bomb hitting the building, in which case, it would be all over the news this morning.

What can we expect from aspx systems?

The site probably went down after I received an on-screen acknowledgement of my upload of a submission to a senate committee inquiry (telling me to expect a confirmation email), but before that email "left the building": the confirmation never arrived, and it is now 24 hours overdue.

This is effectively a failure in the middle of an electronic transaction.  What other electronic transactions within were lost, or indeed, remain vulnerable to future loss?
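What a crash-safe design looks like can be sketched in shell using the one genuinely atomic filesystem operation, rename. The spool directory and file names below are purely illustrative; the point is that a crash at any instant leaves either no submission or a complete one, never a half-written record.

```shell
#!/bin/sh
# Crash-safe acceptance of an upload: write to a temp file, then mv(1) it
# into place. rename(2) is atomic, so readers never see a partial file.
set -e
SPOOL=./spool             # illustrative location
mkdir -p "$SPOOL"

accept_submission() {
    # $1 = submission id; stdin = submission content
    tmp="$SPOOL/.incoming.$1.$$"
    cat > "$tmp"           # a crash here leaves only the hidden temp file
    mv "$tmp" "$SPOOL/$1"  # atomic: the submission appears all at once
}

printf 'my senate submission\n' | accept_submission sub-001
ls "$SPOOL"
```

The confirmation email belongs in the same pattern: record a "pending email" entry in the same atomic step and have a separate mailer drain it, so an acknowledged upload can never lose its confirmation.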

This would be a significant software design or execution failure.  I was using a relatively new part of the parliamentary website, with .aspx extensions in URLs, indicating use of Microsoft ASP.NET.  Either Microsoft software is at fault (which, as a UNIX bigot, I find unsurprising), or the software development and deployment process was hopeless.  My opinion of the rigor of many younger (particularly Microsoft-blinkered) developers, and of older project managers, suggests that the contract for supplying and overseeing the new functionality was flawed.

Again someone should be shot.

Any failure due to poor software in the middle of an electronic transaction is serious, and calls into question the integrity of other software (and therefore data) throughout the parliamentary IT system and the wider domain.

ISP Failure

The domain name service (and presumably provider of other internet transport services) is run by Optus, as noted by clarencegirl.  This surprised me… I’d expected Telstra to have inherited that responsibility from Telecom days, and I wonder when we switched to a company largely owned (indirectly) by the Singapore government.

However, name-to-numeric-address translation was working, and other high-level government websites were functioning normally (even though the internet seemed somewhat bogged down), suggesting that Optus was not at fault.


Conroy’s filter trial

Now it starts to get interesting.  As many are aware, testing of Conroy’s controversial content filtering and censorship scheme is underway, and many predicted problems because of it.  This was indeed my first guess as to the problem.

If testing of the filtering system is related to the parliament being offline, this would be both embarrassing to the government, and poetic justice.

Even if filtering was not the cause of parliament going off the air, the internet was very sluggish, which would vindicate the technical experts who warned of just these problems.

Filtering – False positive blocking of legitimate content

If content filter tests are to blame, then the most generous explanation would be false-positive blocking of legitimate content, given the number of times dog-whistling "think of the children" keywords would appear in recent Hansard transcripts – you know the sort of words I mean.

This would be analogous to the recent incident in the UK, where the Internet Watch Foundation blacklisted a Wikipedia page and, as a side effect, disrupted Wikipedia access and editing for most British users.

It is worth noting that the Attorney-General and related sites, which contain lots of keywords relating to terrorism, were not offline.  This would be consistent with the subject matter of the trial, but if the blacklisted keywords are extended (Conroy has indicated this will happen), it is likely that the AG, emergency response, and law-enforcement sites would suffer.

Illegitimate content inside

The more-horrifying thought is that there was illegitimate and toxic content within, which either was correctly blocked by the filter, or prompted someone inside to "pull the plug" to avoid discovery, having used systems they thought wouldn’t be looked at.

If this was indeed the case, then I’d suspect it was someone in the parliamentary IT team.  Whether such a possible criminal was a politician, a staffer, a cleaner, or a technologist, good practice within parliamentary IT should have detected this earlier.

If police and former judges can be found to have been engaged in such noxious activities, it is not impossible that someone inside could be similarly disgusting, thinking they were sitting safely invisible behind parliamentary security systems.

Again, someone should be taken out and shot.  Actually two: the offender and the IT manager (unless they are the same person).


Crackers

If crackers are to blame, there are two questions:

  • Why wasn’t parliament more secure?
  • Who were they?

The first (again) points to gross negligence by management.

But who might the crackers be?

The last big crack of Oz government sites that I know of directly was in May 2001, when a US spy plane was on the tarmac next to the Chinese equivalent of Pine Gap.  This period is infamous in IT security circles, and was acknowledged publicly by high-ranking CIA staff in a documentary aired by SBS a few years ago.

Is there an international diplomatic crisis now as bad as the one in 2001?  You bet!

The countries capable of such attacks (both means and motive) include "non-allies" such as Russia and China, and "allies", including the ECHELON group (UK/US/Oz/NZ/Canada) and Israel, who have pretty much the best cryptographers and crackers around.

Given the many reports of Israeli crack attacks recently, the close ties between the IT security systems of Israel and the ECHELON countries (indeed, fear of what Israel could do to IT systems may be driving the inaction of some countries more than any other reason), and the relative immaturity of the other parties in the current conflict, if another government is involved, directly or by unacknowledged proxy, I know where I’d put my money.  The motive, though, would remain a mystery, unless there was concern that Australia was about to change its "Neville Chamberlain peace-in-our-time" masterly inactivity.

Either way, if our parliament is open to crack attacks, from "friends" or non-friends, someone ought to be taken out and shot.

Assigning political culpability

The buck for this failure, however caused, must stop with politicians.

The problem is that it is only a year since the ALP took office federally, while most cultural and technical changes driven by CEOs (the equivalent of government ministers) take between two and five years to filter through into general infrastructure and practice.

Thus it is hard to point the finger either at the Howard government for throwing money at the problem unwisely (to be seen to be doing something about security), or at the Rudd government for imprudently rushing to effect change.

This highlights the good work of Lindsay Tanner in promising to implement in full the recommendations of the Gershon report which was scathing about Australian government IT (both in cost and capability terms) inherited from the Howard regime.

The Bottom Line

The failure of the parliamentary webserver, and possibly internal systems, raises grave questions about the integrity (confidentiality and correctness) of government IT systems generally.

We are not talking about an excusable five-minute dropout; we are not talking about a "North Western Victorian Dingo Control Authority Regional Office"; we are not talking about a scheduled maintenance period in the middle of the night, but most of a Saturday afternoon, while most shops are open.

The buck stops with management, i.e. the politicians, whatever the proximal cause of the failure.

If the cause relates to Conroy’s content-filter censorship plans, then this failure should stop this dain-bramaged approach immediately.

In a world increasingly threatened by simple cyber-criminals, cyber-terrorists, and state-sponsored cyber-warfare, the Australian public deserve better service, deserve an explanation, and deserve to see the appropriate scalps taken.


14 Responses to “Implications of off-line parliamentary webserver”

  1. Lyn said

    Playing devil’s advocate, systems maintenance.

  2. Dave Bath said

    Lyn…. “systems maintenance”

    Doubt it VERY much.
    * Most importantly…. mid transaction!
    * Secondly (and if you are right, this also indicates poor practice) no on-screen warning just beforehand that parliament was about to go down.
    * As I said, there are clusters etc these days…. 24/7 is practical (think banks, wordpress, Google, etc)

As a sysadmin from the mid-1980s, for SCHEDULED maintenance (from, say, 17:00) we’d give warnings on login ALL DAY (if not the day before), and even in emergency situations, apart from a kernel panic (see also Screens of Death), we’d wall(1) everyone with typically 5 minutes grace so (wherever possible) they could save the files/records they were editing.
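For the record, that courtesy is only a few lines of shell. The wording and grace period here are illustrative; on a real host the message goes to wall(1) and then shutdown(8) provides the grace time:

```shell
#!/bin/sh
# Sketch of pre-shutdown courtesy: compose a warning, broadcast it,
# then shut down with grace time. Wording and times are illustrative.

warn_users() {
    # $1 = minutes of grace before the system goes down
    printf '*** SCHEDULED MAINTENANCE: system going down in %s minutes ***\n' "$1"
    printf '*** Save your work now; services resume Sunday morning.    ***\n'
}

warn_users 5
# On a real host (needs root):
#   warn_users 5 | wall
#   shutdown -h +5 "Scheduled power outage -- back Sunday morning"
```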

Although you may be correct, a rude shutdown without warnings for “maintenance” is a Bad Thing, should not be tolerated, but is unsurprising in these latter days of poor rigor (or rather, IT cognitive rigor mortis).

  3. Lyn said

    Thinking about your alternatives, what period of time would you expect to be a reasonable estimate for fixing them, from noticing the problem to up and running again? I wouldn’t know about the other options, but 12 hours seems reasonable for systems maintenance to me, having been on the receiving end of it often.

    While it’s reasonable to expect your average IT person to know shutdown without warning is a bad thing, is it equally reasonable to expect your IT person in the Canberra bubble to think of such things?

    That last is crummy, true, but you could make the same point re the submissions problems you’ve pointed out. And given this government as a whole’s understanding of the expectations of regular net users, you’d hardly be expecting them to demand better value for money on anything IT related.

    That’s all I have I’m afraid. The devil will have to take care of himself from here on in.

    My personal favourite is a filtering glitch.

  4. Dave Bath said

    “Reasonable amount of time”….

    A few seconds these days is reasonable, even in the presence of hardware crashes. See High availability cluster.

    Workmen digging a trench and cutting cables? Negligent planning/approval/oversight. Shoot someone.

    What would you put up with from a bank? From Google? From wikipedia? The lost/duplicated transaction problem though is more important for banks and tax offices. (Which bank?)

    And filtering glitch /was/ my immediate hunch, and my earnest hope.

  5. Lyn said

    “A few seconds….”

    I see. Well that’s different then.

    I’ll go with filtering, even if it’s not true. Just because it would be just.

  6. zombinol said

It does make perfect sense, you know, that Optus provide Government services like DNS, web hosting or Australian defence communications; you see, it fits with the Government’s outsource-at-all-costs model, to use more reliable and fundamentally better commercially managed providers of IT services than any home-grown provider.

    And it also fits with a shrewd security model where your security of managed IT services must be impeccable to trust it to, well how shall we characterise Optus, we’ll call them the telco of the Government that owns more Australian infrastructure than our government does because they can manage it better. I mean imagine if the Australian Government was left to their own devices to manage all that Government IT stuff, there would be more of this sort of stuff when a responsible porn community had to employ their own filtering to save tax dollars being wasted by Government staff.

So given all that, it’s probably just a technical glitch with no political, espionage or vacuum-cleaners-in-the-data-centre type issues, and worthy of a response from the, I am sure, webmanager (is that their first or last name?) who would ensure that your submission WAS received.

I wonder if Satyam had anything to do with the management of the service and the outage was a cut-over to someone more responsible, like Jim’s Global IT Services; I believe there are franchises available in your area, please call 13 JIMS.

  7. […] Implications of off-line parliamentary webserver […]

[…] Posts: Implications of off-line parliamentary webserver; Oz Parliament Website Dead; Oz Parliament website up and stumbling; Anti-net-censorship tools; Funding for […]

  9. Richard said

    FWIW, there was a scheduled power shutdown at Parliament House from Saturday afternoon to Sunday morning. Building occupants were warned of disruptions to computer (and many other) services.

  10. Dave Bath said

    (Richard entered an email with his comment).

Let’s accept that at face value rather than as a “dog ate my homework”.

    OCCUPANTS were warned. The applications to the public WERE NOT. And an electronic transaction was lost.

    And there is no failover/handover to another site? THAT’s interesting! Sounds like everything could be taken out with a single meteorite or something like it.

As I commented here, scheduled outages (and I cannot imagine this was done on the spur of the moment, without at least a week’s warning) meant we put up a big notice to all users well in advance. Have IT capabilities dropped since the 1980s and before?

Hell, with very little notice, the front page of the site could have carried a big banner message: “We’ll be going off line”. How rude!

What about the submissions application, rather than half-accepting a transaction, redirecting users (with, say, an hour’s grace) to a “we’ll be going offline soon, so hurry up” notice?

Or even redirecting ANY requests to a static page hosted somewhere else while the power was off, saying “Parliament is having a scheduled power outage from X to Y to maintain Z”, so we don’t panic over a terror attack?
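That static-page fallback takes about ten lines of shell. The docroot path and the wording are hypothetical; the only real trick is swapping the page in atomically so nobody ever sees it half-written:

```shell
#!/bin/sh
# Swap the front page for a static outage notice before a scheduled outage,
# and restore it afterwards. Docroot and wording are illustrative.
set -e
DOCROOT=./docroot
mkdir -p "$DOCROOT"
echo '<html><body>normal front page</body></html>' > "$DOCROOT/index.html"

enter_maintenance() {
    cat > "$DOCROOT/index.html.tmp" <<'HTML'
<html><body><h1>Scheduled power outage</h1>
<p>This site is offline from Saturday afternoon until Sunday morning
for scheduled electrical work. Please resubmit after that time.</p>
</body></html>
HTML
    cp "$DOCROOT/index.html" "$DOCROOT/index.html.live"  # keep the real page
    mv "$DOCROOT/index.html.tmp" "$DOCROOT/index.html"   # atomic swap
}

leave_maintenance() {
    mv "$DOCROOT/index.html.live" "$DOCROOT/index.html"  # restore
}

enter_maintenance
grep -c "power outage" "$DOCROOT/index.html"
```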

    It still doesn’t explain ongoing problems.

    Sorry, this doesn’t let IT planning off the hook. At the very least it is grossly inconsiderate compared to acceptable sysadmin courtesy of decades ago.

    Richard…. how would you go about estimating how many senate submissions were lost from private individuals sending them in over the weekend?

    I certainly know there has been no “ooops” message from the transaction saying “can you resend please?”

    Damn that IT business continuity planning document I wrote is looking tasty to my dog!

  11. Yep, at a minimum, there should have been a static page saying that the APH website is offline for maintenance.

    Furthermore, there should have been a notice put up warning people.

    In any case, I would have thought that the APH site was important enough that outages of that length are avoided.

[…] Bath on Implications of off-line parliamentary webserver; Richard on Implications of off-line parliamentary webserver; Dave Bath on Oz Parliament website up and stumbling; Oz Parliament website up and stumbling […]

  13. Christopher Flynn said


So you are saying that there are no Uninterruptible Power Supplies protecting the senate web services (and who knows what else), to ensure that a planned or, more importantly, unplanned power disruption does not interfere with Australian Government service delivery to the public?

How maladroit of the Australian Government not to have the most basic infrastructure protection. Will we learn that the web server is not in a controlled data centre and actually lives under an office desk, or with the photocopiers?

  14. Richard said

    I should note that I don’t work in IT and I’m not familiar with their back end systems or DRPs. I just know that the power outage had been scheduled several weeks in advance. I’m an IT client, not a provider.
