This describes the situation where phases are merged or phase change assessments are not completed.

In particular, I want to describe what happens if you mix ‘Acceptance’ with ‘Completion’.

How can this happen?

  • Delivery pressures
  • Fixed deadlines for a complex solution that are too short

How to spot this?

  • Ad hoc requirements
  • Requirements defined in draft form and not signed off before handing over to the supplier
  • Supplementary documentation with slightly different titles

How to avoid this?

  • Have a phase change assessment, whereby documentation is reviewed and signatures are gathered regarding readiness to proceed to the next phase

Why ‘Snakes and Ladders’?

  • Because the phases are mixed, you are actively working on an earlier phase while attempting to get sign-off on a later dependency
  • It is not uncommon to achieve small milestones but to drop back in another area
  • Working in this crossover can be quite disorientating

But Agile is just this surely?

  • Agile does not normally involve a buyer / supplier relationship built around million-pound tenders

Escalation is one type of flow.

Sometimes it helps to think about it in people terms, starting at the beginning of the story.


One criticism of Windows was that folks were encouraged (by websites) to download a custom .exe file to the Desktop and double-click it.

In response to this, a wave of security products and some access-control changes put a stop to that.

Some users missed the convenience.

Could this ever happen on Linux / Unix?


Here is an extract from the install instructions for a Google-publicised project:

curl -L | bash

It seems the *nix community is in too great a hurry, putting convenience before security.

I point out some of the reasons why not in the next section.

Internet pipe to Bash – why not?

To suggest such an install procedure is to ignore many of the security lessons of the past decade.

Possible risks 1: Fat-fingered redirect

By advising the user to invoke curl with the -L flag, the developer is encouraging users to trust whatever redirection the remote site serves.

The reason curl reports a redirect (rather than silently following it) is to allow end users to verify the redirection themselves, instead of trusting whatever redirect is configured at the remote site.

What would happen if a bogus redirect was inserted by mistake, or by a malicious insider? If it only happened for an hour would the company think it important enough to inform the developer population?
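A safer pattern, sketched below, is to save the script to a file, read it, and only then run it. The real download URL is elided in the original instructions, so a local file:// URL stands in for it here to keep the sketch runnable:

```shell
# Stand-in for the project's real https install-script URL (elided above).
payload=$(mktemp)
printf 'echo "install steps would run here"\n' > "$payload"
url="file://$payload"

# 1. Save the script instead of piping it straight into bash. Without -L,
#    curl stops at any redirect and reports it for a human to check.
curl -fsS -o install.sh "$url"

# 2. Read what you are about to execute (with less, or any editor).
cat install.sh

# 3. Run it only once satisfied with the contents.
bash install.sh
```

In practice you would substitute the project’s real https address for the file:// stand-in; the save, inspect, then run steps stay the same.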

Possible risks 2: Shifting sands

Exactly how do you personally know that the code that was there yesterday is the same code as today?

Does it come in a package with checksums and a well understood inbuilt verification of checksum feature?

Can you manually download a verification checksum from a different mirror server, than the actual code download?
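Here is a minimal sketch of what that verification looks like. The filenames are hypothetical, and in practice the .sha256 file would be fetched from a different mirror than the tarball itself:

```shell
# Hypothetical release artifact; a real one would be downloaded from a
# mirror, and its checksum file from a *different* mirror.
printf 'pretend this is a release tarball\n' > project-1.0.tar.gz
sha256sum project-1.0.tar.gz > project-1.0.tar.gz.sha256

# Verification: sha256sum -c recomputes the hash and compares it against
# the independently obtained checksum file, reporting "OK" on a match.
sha256sum -c project-1.0.tar.gz.sha256
```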

Possible risks 3: Compromised server

Compromised servers are always a risk for any internet hosted software.

Hosting code through acceptance into a distribution like Debian or Red Hat allows a small company to leverage the infrastructure provided.

It also elevates otherwise untrusted software somewhat, due to the integration build process, QA review, and hosting infrastructure which such distributions provide.

Bitbucket, Gitorious, Google Code, and GitHub offer some minor improvement over hosting a project yourself.

Then there are PyPI, CPAN, and other convenience collections, which, whilst not offering massive assurance, at least mitigate the next problem described.

Possible risks 4: DNS hijack / redirection

DNS cache poisoning is all too common, unfortunately.

Whilst this project is getting some backing from Google, it would be unwise to assume that it (and any mirrors?) employs DNSSEC to mitigate cache poisoning. If DNSSEC were employed effectively, would that cover the original http endpoint or the redirected http endpoint?

Commentary and other examples:

In fairness to the developers, there are some additional install notes, and in particular there are some hints for Debian / Ubuntu folks that include this line:

sudo npm install -g yeoman

However, those install instructions also suggest at the start that you should still do an initial pipe bash, in case a previous install is present.

Doing that initial pipe bash, then switching to more traditional package management techniques, does not mitigate any of the risks described earlier.

It may be that developers are being encouraged to do this sort of hacky curl stuff by seeing this entry from the npm site:

curl | sh

The observant amongst you will notice that there is no -L flag here, so strike off one of those risks listed earlier.

What comes after the pipe symbol ( | )? Does that make any difference from the other example?

That answer is left as an exercise for the reader.

Further examples (added after article first written):

The past 5 years have seen much upheaval in roles and responsibilities within Information Technology.

Business leaders felt that IT was too partitioned (and thus expensive).

The result has been a narrowing of deep knowledge in individual areas, with the focus now on breadth.

IT - Narrowing and Connecting


Phrases such as “The developer is king” reflect this shift.

As with all power shifts, there are upsides and downsides.

Cloud computing was fuelled in part by this shift – that is an upside.

Organisations now have to introduce control processes and checks on developers, some of which, it could be argued, are a direct replacement for the partitions between people in the previous model.

Example: Source code and deployment.

If one person acting alone has full responsibility for source code and deployment, where does that leave the company when the Developer leaves?

Ah, you might argue, but the Developer is a responsible type and pushes to central company controlled repositories and deployment templates.

My question here is “How responsible?”

Surely ‘responsible’ is a sliding scale – some Developers will be more so, some less so.

I was deliberate in using the phrase ‘central company controlled’ when talking about repositories and deployment templates.

Are you a Head of IT or a company Director? If so, do you know…

  • How granular the access control is on source repositories?
  • How granular the access control is on deployment templates?
  • How many people are sharing the main ‘Owner’ account?
  • The credentials for the ‘Owner’ account?

For the final two points, an argument and counter argument:

“But the Head of IT and Directors do not need to access source control; in fact, best not to have them in there in case they do some damage.”

Without access to source control, which non-developer have you tasked to act as code keeper on your / the company’s behalf?

This post is providing background for a series of articles – more to follow.

Something important is happening to markup.

The semantic web – that’s XHTML, right? About 5 years back that was the only way, but today its modern take is RDFa.

What should I beware of then?

Competing standards licensed under FRAND terms.

For those of you not familiar with FRAND, it is a pretence. Its proponents will use the word ‘open’; however, there are always caveats and a ‘boys’ club’ mentality involved.

You cannot confer FRAND rights on anyone else, and that is where the lie is outed.

Ask anyone bearing FRAND gifts this question:

If the terms are Non-Discriminatory, like you say, then how come I cannot give those rights to the other 50 people in my local business networking club?


So RDFa is an open standard?

Yes. In 2008 the standard reached ‘Recommendation’ status and has the backing of the web standards body, the W3C.

The Wikipedia link above gives a short snippet, but here is a more in-depth primer on RDFa

So if you want to add ‘richness’ to your content and make it easier for search engines to index, use RDFa.

There are at least two alternatives to RDFa being promoted at the moment, however both of the other markup extensions would result in your website content being subject to ‘Terms and Conditions’ defined by their proponents.

The web was never about corporate ownership of tags and markers.

As a company you need to have a strategy for rich markup that includes considerations regarding tag copyrights.

With RDFa, there is no corporate entity or terms and conditions involved, which keeps things easy for that future strategy.

Your existing site has probably been safely created without needing to decide on this; however, I suspect your next relaunch will have to decide one way or the other.

Security is a mindset, rather than any single action

My opinion. Swill it around a bit … do you agree?

Layers and lack of control:

A new starter is given access to live servers and asked to code up some simple shell scripts to automate a bit of hosting setup and/or cron tasks.


Not having deep experience with Ubuntu servers, the new starter immediately dives in and tries to write scripts that use the extended (non-POSIX) features of the bash login shell.

Bash arrays or let count=count+1 are examples.
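A small sketch of the kind of script in question (the hostnames are invented). It behaves under bash, but dash rejects the array syntax outright:

```shell
#!/bin/bash
# Count entries using two bashisms: an array, and 'let' arithmetic.
# 'bash count.sh' prints 3; 'dash count.sh' fails on the first line with
# something like: Syntax error: "(" unexpected
hosts=(web1 web2 web3)
count=0
for h in "${hosts[@]}"; do
    let count=count+1
done
echo "$count"
```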

So the task progresses and the bashisms are in.

Another task is taken on and completed again using bash extensions similar to those mentioned above.

Now another administrator is asked to add those tasks to a daemon process or startup script.

But the scripts don’t work!

Hacker Solution: Change the default system shell from /bin/dash to /bin/bash

[Diagram: task1, task2, …, use in daemon/cron]

What is highlighted in red in the diagram are the human failings.

The creation of a daemon process / cron should not trigger a policy change on a server.
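The less hacky fix is to change the scripts rather than the server: either declare bash explicitly in each script’s shebang line, or rewrite the bashisms (arrays, let) in POSIX form. A sketch of the latter (hostnames invented), which runs under /bin/dash unchanged:

```shell
#!/bin/sh
# POSIX rewrite: a plain word list replaces the bash array, and $(( ))
# arithmetic replaces 'let'. Runs identically under dash, bash, or ksh.
hosts="web1 web2 web3"
count=0
for h in $hosts; do
    count=$((count + 1))
done
echo "$count"
```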

There are several good reasons why Ubuntu switched to dash as the default system shell over 5 years ago.

From my personal point of view, bash is too complex, and is better suited to the role of a ‘login shell’ than a ‘system shell’ / ‘process shell’.

The more complex a piece of software…

  • the more likely a new security hole can be found
  • the more likely a move of versions will result in breakage

The phrase “Keep it simple, stupid” feels appropriate here.

Human failings – how could this happen?

The intricacies of posix standard shell scripts are not the first thing a new Linux System Administrator will learn.

Should they be? Well, that depends on whether you see certification as important.

Pushing your new Linux System Administrator to become Red Hat (RHCE) certified is one way of introducing them to standards and the context of operational tasks.

Here is an unofficial introduction to the sort of thing that RHCE might expect.

How many years into Linux System Administration should the Administrator know about the Single Unix Specification? It all depends on your organisation and how you wish to operate.

In most organisations, the implementation of a new daemon / startup task could never trigger a policy change at the server level.

Some organisations that have no firm security policies and no configuration management documentation, might well fail to stop the actions accompanied by red indicators in my diagram.

This is an organisational / human failing and has nothing to do with the specifics of the technology. It could interchangeably be Linux, or Solaris, or Windows that lacked the human controls to prevent such a change.

Bash is secure – it is rare that there are any new exploits:

Bash changes infrequently (which is good), and most exploits in the last 10 years have been around insecure temporary file creation.

That does not mean that using a simpler system shell (/bin/dash) is unnecessary. If you need interactive features and feel lost without non-POSIX bashisms, then do use bash as your login shell.

From experience, the folks who use the more intricate / esoteric features of bash, are usually lacking in knowledge of sed / awk / python.

Using bash to do serious scripting (string processing in particular) is not too different from using a hammer to knock in a screw. It will work, but there are better tools for the job.
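To illustrate (the log line here is made up): extracting a field is one line of awk, whereas the shell-only route leans on parameter-expansion tricks:

```shell
line='2013-04-01 12:00:01 ERROR disk full on /var'

# awk: field extraction is its native job -- prints "ERROR".
echo "$line" | awk '{print $3}'

# Shell-only equivalent using parameter expansion -- workable, but this
# is the hammer-on-a-screw approach for anything more involved.
rest="${line#* * }"   # strip the date and time fields
echo "${rest%% *}"    # first word of what remains -- also "ERROR"
```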


A Directive and Implementation Steps:

[Diagram: Directive & Implementation Steps]

… and filling in some steps …

[Diagram: Directive & Two Labelled Steps]

Now at this point there seems to be a plan, and at least the first couple of implementation steps.

What unfortunately can happen in moving from directive to steps is that the overall goal becomes less visible.

So we rush ahead and implement the steps, and provided the steps are ticked off, we might think the job is done.

It is easy to implement a load balancer – they are available from many providers who are effectively reselling you a Zeus / Riverbed “Application Delivery Controller” or similar.

The Application Delivery Controller (ADC) will do a job, but it is only as intelligent as the configuration you ask for.

“But if I buy a Layer 7 rather than a Layer 4 then that is better right?”

It depends on what you want – revisit your goal (at the top) and try to avoid being distracted by sales pitches and technobabble.

If your goal is mostly to have High Availability (HA) for http, then you might well be able to ignore the extra features a Layer 7 sales bod might be trying to sell.

Do you really need ftp to be distributed by a load balancer? … you decide.

There is an overview of a few of the common algorithms in the short article here

I include an extract below:

  1. Round Robin
  2. Weighted Round Robin
  3. Least Connections
  4. Least Response Time

It is easy to make the wrong choice from the above, and miss your original directive.
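To make the first of those algorithms concrete, here is a toy sketch of round robin (the backend addresses are invented; a real Application Delivery Controller does this inside its proxy layer):

```shell
# Toy round robin: each successive request goes to the next backend in
# the cycle, wrapping around after the third.
i=0
for request in 1 2 3 4 5 6; do
    set -- 10.0.0.1 10.0.0.2 10.0.0.3   # hypothetical backends
    shift $((i % 3))                     # rotate to the next backend
    echo "request $request -> backend $1"
    i=$((i + 1))
done
```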

I repeat (and expand) the original directive next:

High Availability under all circumstances

That goal is very different from “High Availability under normal circumstances”.

An infrastructure / network consultant can help you design for each.

My personal feeling (based in part on experience) is that option 4, “Least Response Time”, might help you achieve a goal of “High availability under normal circumstances”; however, it might actually hinder you if your goal is “High availability under all circumstances”.

A reminder about the main point of this article – focus on the original goal.

Implementing an individual step (load balancer), might make you feel like you are moving towards your goal through action, but do keep revisiting your original directive to make sure.

If you do decide to use an infrastructure / network consultant, then have a quick read of commonly available industry solutions here ahead of the consultant session.

Another link which has brief descriptions of Layer 4 and Layer 7 here
