Security


One criticism of Windows was that folks were encouraged (by websites) to download a custom .exe file to the Desktop and double-click it.

In response to this, a wave of security products and some access control changes put a stop to that.

Some users missed the convenience.

Could this ever happen on Linux / Unix?

Yes!

Here is an extract from the install instructions for a Google-publicised project:

curl -L get.yeoman.io | bash

It seems the *nix community is in too great a hurry, putting convenience before security.

I point out some of the reasons why not in the next section.

Internet pipe to Bash – why not?

To suggest such an install procedure is to ignore many of the security lessons of the past decade.

Possible risk 1: Fat-fingered redirect

By advising the user to invoke curl with the -L flag, the developer is encouraging users to trust whatever redirection has been configured at the remote site.

Without -L, curl advises of the redirection rather than silently following it, allowing the end user to verify any redirect themselves instead of trusting whatever target has been entered at the remote site.

What would happen if a bogus redirect was inserted by mistake, or by a malicious insider? If it only happened for an hour would the company think it important enough to inform the developer population?
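A less convenient but safer alternative is to look at any redirect yourself and read the script before running it. A minimal sketch, using the project's published URL (the local filename here is arbitrary):

# View any redirect before deciding to trust it (HEAD request, headers only)
curl -sI http://get.yeoman.io | grep -i '^location:'

# Save the script to a file, read it, and only then run it
curl -sL -o yeoman-install.sh http://get.yeoman.io
less yeoman-install.sh
bash yeoman-install.sh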

Possible risk 2: Shifting sands

Exactly how do you personally know that the code that was there yesterday is the same code as today?

Does it come in a package with checksums and a well understood inbuilt verification of checksum feature?

Can you manually download a verification checksum from a different mirror server, than the actual code download?
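For projects that do publish checksums, a manual verification might look something like this. The project name, URLs, and mirror below are purely hypothetical:

# download the tarball, and a checksum fetched from a different mirror (hypothetical names)
curl -LO https://downloads.example.org/sometool-1.0.tar.gz
curl -LO https://mirror.example.net/sometool-1.0.tar.gz.sha256
# assumes the .sha256 file is in the usual "HASH  filename" format
sha256sum -c sometool-1.0.tar.gz.sha256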

Possible risk 3: Compromised server

Compromised servers are always a risk for any internet hosted software.

Hosting code through acceptance into a distribution like Debian or Red Hat allows a small company to leverage the infrastructure provided.

It also elevates otherwise untrusted software somewhat, due to the integration build process, QA review, and hosting infrastructure which such distributions provide.

Bitbucket, Gitorious, Google Code, and GitHub offer some minor improvement over hosting a project entirely yourself.

Then there are PyPI, CPAN, and other convenience collections, which, whilst not offering massive assurance, at least mitigate the next problem described.

Possible risk 4: DNS hijack / redirection

DNS cache poisoning is all too common, unfortunately.

Whilst this project is getting some backing from Google, it would be unwise to assume that it (and any mirrors?) employ DNSSEC to mitigate cache poisoning. If they did employ DNSSEC effectively, would that be on the original HTTP endpoint or the redirected HTTP endpoint?
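A crude way of checking whether a zone publishes DNSSEC signatures at all is to ask for them explicitly; a signed zone will return RRSIG records alongside the answer (this only shows that signatures exist, it does not prove your resolver validates them):

# request DNSSEC records explicitly; signed zones include RRSIG records in the response
dig +dnssec get.yeoman.io A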

Commentary and other examples:

In fairness to the developers, there are some additional install notes, and in particular there are some hints for Debian / Ubuntu folks that include this line:

sudo npm install -g yeoman

However, those install instructions also suggest at the start that you should still do an initial pipe-to-bash, in case you had a previous install present.

Doing that initial pipe-to-bash, then switching to more traditional package management techniques, does not mitigate any of the risks described earlier.

It may be that developers are being encouraged to do this sort of hacky curl stuff by seeing this entry from the npm site:

curl https://npmjs.org/install.sh | sh

The observant amongst you will notice that there is no -L flag here, so strike off one of those risks listed earlier.

What comes after the pipe symbol ( | )? Does that make any difference from the other example?

That answer is left as an exercise for the reader.

Further examples (added after article first written):

Chkrootkit and other tools that scan for rootkits sometimes report a Python-related ‘.path’ file as suspect.

Example:

/usr/lib/pymodules/python2.6/.path

The script/binary responsible for creating that file is /usr/sbin/update-python-modules, from the Debian & Ubuntu package python-support.

code extract from /usr/sbin/update-python-modules

There is no harm in understanding how to adapt chkrootkit or alternatives to ignore a list of locally recognised false positives; however, some might consider this ‘false positive’ a bug.
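One simple way to adapt things, rather than patching chkrootkit itself, is to filter its output against a local whitelist. This is only a sketch; the whitelist path and contents are assumptions for illustration:

#!/bin/sh
# Run chkrootkit and hide locally recognised false positives.
# The whitelist is a plain file of fixed strings, e.g. one line reading:
#   /usr/lib/pymodules/python2.6/.path
WHITELIST=/usr/local/etc/chkrootkit-ignore
chkrootkit | grep -v -F -f "$WHITELIST"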

The past 5 years has seen much upheaval in roles and responsibilities within Information Technology.

Business leaders felt that IT was too partitioned (and thus expensive).

The result has been a narrowing of knowledge in individual areas, with a focus now on breadth.

IT - Narrowing and Connecting


Phrases such as “The developer is king” reflect this shift.

As with all power shifts, there are upsides and downsides.

Cloud computing was fuelled in part by this shift – that is an upside.

Organisations now have to introduce control processes and checks on developers, some of which it could be argued, are a direct replacement for the partitions between people, in the previous model.

Example: Source code and deployment.

If one person acting alone has full responsibility for source code and deployment, where does that leave the company when the Developer leaves?

Ah, you might argue, but the Developer is a responsible type and pushes to central company controlled repositories and deployment templates.

My question here is “How responsible?”

Surely ‘responsible’ is a sliding scale – some Developers will be more so, some less so.

I was deliberate in using the phrase ‘central company controlled’ when talking about repositories and deployment templates.

Are you a head of IT or a company Director? If so, do you know…

  • How granular the access control is on source repositories?
  • How granular the access control is on deployment templates?
  • How many people are sharing the main ‘Owner’ account?
  • The credentials for the ‘Owner’ account?

For the final two points, an argument and counter argument:

But the Head of IT and Directors do not need to access source control; in fact, it is best not to have them in there in case they do some damage.

Without access to source control, which non-developer have you tasked to act as code keeper on your / the company's behalf?

This post is providing background for a series of articles – more to follow.

Security is a mindset, rather than any single action

My opinion. Swill it around a bit … do you agree?

Layers and lack of control:

A new starter is given access to live servers and asked to code up some simple shell scripts, to automate a bit of hosting setup and/or cron tasks.

#!/bin/bash

Not having deep experience with Ubuntu servers, the new starter immediately dives in and tries to write scripts that use the extended (non-POSIX) features of the bash login shell.

Bash arrays and let count=count+1 are examples.
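For illustration, here is roughly what the portable (POSIX sh, dash-safe) equivalents of those two constructs look like:

#!/bin/sh
# 'let count=count+1' and bash arrays are not POSIX; dash rejects them.
count=0
count=$((count + 1))       # POSIX arithmetic expansion

set -- one two three       # positional parameters standing in for an array
for item in "$@"; do
    printf '%s\n' "$item"
done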

So the task progresses and the bashisms are in.

Another task is taken on and completed again using bash extensions similar to those mentioned above.

Now another administrator is asked to add those tasks to a daemon process or startup script.

But the scripts don’t work!

Hacker Solution: Change the default system shell from /bin/dash to /bin/bash
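On Debian / Ubuntu that ‘solution’ usually amounts to something like the following, shown only to illustrate the change being made, not to recommend it:

# answering "No" at the prompt repoints /bin/sh from dash to bash system-wide
sudo dpkg-reconfigure dash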

[Diagram: task1, task2, …, use in daemon/cron]

What is highlighted in red are the human failings.

The creation of a daemon process / cron should not trigger a policy change on a server.

There are several good reasons why Ubuntu switched to dash as the default shell over 5 years ago.

From my personal point of view, bash is too complex, and is better suited to the role of a ‘login shell’ than a ‘system shell’ / ‘process shell’.

The more complex a piece of software…

  • the more likely a new security hole can be found
  • the more likely a move of versions will result in breakage

The phrase “Keep it simple, stupid” feels appropriate here.

Human failings – how could this happen?

The intricacies of posix standard shell scripts are not the first thing a new Linux System Administrator will learn.

Should they be? Well, that depends on whether you see certification as important.

Pushing your new Linux System Administrator to become Red Hat (RHCE) certified is one way of introducing them to standards and the context of operational tasks.

Here is an unofficial introduction to the sort of thing that RHCE might expect.

How many years into Linux System Administration should the Administrator know about the Single Unix Specification? It all depends on your organisation and how you wish to operate.

In most organisations, the implementation of a new daemon / startup task could never trigger a policy change at the server level.

Some organisations that have no firm security policies and no configuration management documentation might well fail to stop the actions accompanied by red indicators in my diagram.

This is an organisational / human failing and has nothing to do with the specifics of the technology. It could interchangeably be Linux, or Solaris, or Windows that lacked the human controls to prevent such a change.

Bash is secure – it is rare that there are any new exploits:

Bash changes infrequently (which is good), and most exploits in the last 10 years have been around insecure temporary file creation.

That does not mean that using a leaner non-interactive system shell (/bin/dash) is unnecessary; if you need interactive features and feel lost without non-POSIX bashisms, then do use bash as your login shell.

From experience, the folks who use the more intricate / esoteric features of bash are usually lacking in knowledge of sed / awk / Python.

Using bash to do serious scripting (string processing in particular) is not too different from using a hammer to knock in a screw. It will work, but there are better tools for the job.

Links and Further Reading:

Having recently taken delivery of a new VPS, I logged on and immediately set about securing the setup.

Changing the root password sounds obvious, but there are alternatives:

  • Blocking root access via ssh.
  • Turning off password authentication and using known keys only.

Doing either of the above might have you thinking that the strong password the datacentre set up for you can now stay – but wait!

VPS, cloud servers, and some dedicated servers are often provisioned from templates.

Those templates may have set the password early in the build process, before secure hashing was fully configured.

 

At a glance – is the root password in /etc/shadow sufficiently hashed?

Here is an example which shares some characteristics with what I found on my newly provisioned server:

root:Npge08pfz4wuk:15000:0:730:7:::

If you are accustomed to working with /etc/passwd and /etc/shadow, you will have spotted the problem already …

The second field is way too short!

It should instead look something like this:

root:$6$OBEzW/iiKRe/ww$vfnfEFg41l1dK4zE4YM9PiRKs7ic5lvg1WgFWgi.VF0O/MYCZPELqedCmSybFQ5.0twYbc1fU6VnXqdACqELj0:15000:0:730:7:::

The second field beginning $6 indicates that the password has been hashed using SHA-2 (512), often abbreviated to sha512.

If you just want to print out the shadow password entries for those users that can log in, then use this command:

egrep -v '.*:\*|:\!' /etc/shadow | awk -F: '{print $2}'

and just double-check that all lines output begin with $6.

If not, then issue a passwd command and input a new password for the problem user (root or otherwise)

Provided your system is not outdated, the proper sha512 hashing should be in place. When you regenerate the password with passwd, you should see that the second field in /etc/shadow is a lot wider, and no longer a security issue.
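If you also want to see which account each hash belongs to, a variant along these lines (just a sketch, run as root) lists the login-capable users together with the start of their hash field:

# print username and the first few characters of the hash for accounts that are not locked
awk -F: '$2 != "" && $2 !~ /^[!*]/ {printf "%-12s %.4s...\n", $1, $2}' /etc/shadow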

 

The third field in /etc/shadow – pre-populated for you during build:

Days since Jan 1, 1970 that password was last changed

That third field will probably be set to 15000 or a number of that magnitude.

Too large a number (i.e. a date in the future) would see ‘in future’ complaints being registered in a log file under /var/log/.
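To see what today's value of that field would be, you can compute the days since the epoch yourself (assuming a date command that supports %s, as GNU date does):

# days elapsed since 1970-01-01, i.e. what passwd would write into field 3 today
echo $(( $(date +%s) / 86400 ))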

 

Links and Further reading:

The link below gives an example of using mkpasswd to fix things.

However, do be aware that echoing passwords to pipe into a secondary command is not considered ideal, due to the process details being visible in plain text to other users who might be running ‘top’.

Python can also be used to generate replacement password hashes – although, again, be careful what might be viewable by other users whilst your process is running:

python -c "import crypt, getpass, pwd; print crypt.crypt('passwordunhashed', '\$6\$SALTsalt\$')"
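Note that the one-liner above is Python 2 syntax. On a system with only Python 3, a roughly equivalent command (with the same caveat about visibility to other users) would be the following; the crypt module still ships with most Python 3 builds, though it has been deprecated in recent releases:

python3 -c "import crypt; print(crypt.crypt('passwordunhashed', '\$6\$SALTsalt\$'))"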

Talking yesterday with some folks about cloud computing, I mentioned Salesforce.com as an early pioneer.

Formed in 1999, Salesforce.com “is best known for its Customer Relationship Management (CRM) products”

Above Quote: Wikipedia (2011)

Phrases like the following are how I sometimes talk about cloud computing:

Rather than buy software with a large up front cost, install it locally, then go back to the vendor and pay for a new version 18 months later…

What I am doing here is marketing cloud computing on the basis of ‘continuous upgrades’

With that new model comes some caveats and some new thinking.

Funding cloud computing development:

Making up a company name, let us call the vendor “Cloudgang” (No relation to any real company intended)

How will Cloudgang developers be paid?

  1. Software rental (Monthly / Quarterly “Pay as you go”)
  2. One time fee
  3. Advertising sponsored
  4. Other model

(1) above is easy to explain, as there is an analogy in mobile phones.

(3) above is easier to explain now that lots of folks have experience of the personal (free) editions of Gmail and Hotmail.

Service level agreements (SLA) and cloud computing:

This is a huge area and I will not cover it in detail; however, I do cover Google's offering briefly.

The SLA for “Google Apps for Business” is here.

Government cloud contracts might well be covered under an enhanced SLA, depending on the contract negotiations involved.

Some Extracts:

  • “99.9% uptime guarantee SLA and 24×7 support”
  • “Phone support for critical issues”

Certification and cloud computing:

This is a huge area and I will not cover it in detail; however, recent events have raised an important question.

Traditionally government is used to going through a certification process, buying the software product, end of story.

Cloud computing may require an adaptation of the certification process, which may depend in part on whether the funding model is (1) or (2) above.

Re-certification, and how often that occurs, is going to have to form a part of the contractual negotiations.

Here is what David McClure from US Government (GSA) had to say recently:

FISMA recognize that products evolve and that recertification is part of the process

To provide an example, I will use Healthcare (Critical Care) and stretch a standard (CCMDS) to be a certification.

So in this fictional procurement process, CCMDS is a certification process with an external certification body.

Your local hospital signs a 5 year cloud computing contract with Cloudgang for their new Critical Care product ICGang.

Part of that process might require a CCMDS re-certification every 2 years.

Vendor enhancements, continuous development:

As a purchaser of the service, you need to consider whether this sort of enhancements model is one which you favour.

Going back to my fictional Healthcare procurement:

  • If you and 5 other hospitals contract for ICGang, would you want the benefits of enhancements the other 4 hospitals suggest, as they happen?
  • Do you instead wish the product to remain static, and ‘conform to specification’ for the entire contract term?

If you choose the first option, ‘benefits of enhancements’, then:

  • Would you require an ‘opt out of further changes‘ process?
  • Do you see the system functioning as a sort of ‘optional updates’ setup, where you are prompted for each major ‘enhancement’?
  • Should you avoid a "one time fee" sort of contract, and instead prefer a monthly / quarterly payment setup?

In the last point in that list, I have returned to the subject of funding.

Developers need paying. Fact.

If you choose the benefits of enhancements option, then that ongoing development work will need to be funded.

If the contract is a one time fee, then the vendor will have to ‘factor in’ an estimate of the number of hours of development needed to deliver under that benefits of enhancements setup.

An alternative arrangement, conforming to specification, might seem attractive; however, your planning is then really about a series of ‘one time fee’ contract negotiations, accepting that your software remains unenhanced for the term of the contract.

Choosing the right sort of arrangement, might well depend on the organisation doing the purchasing, and the business area:

  • Is the organisation dynamic and capable of managing a more interactive relationship with the vendor?
  • Is the business area rapidly developing, and seems a more natural fit with a “benefits of enhancement” style agreement?

Links and further reading:

There are many articles about securing /tmp by having it on a separate disk and mounting it noexec.

When this is done (sometimes a datacentre might do it for security), on Debian this can cause an issue.

When apt is preparing to install it will often try and extract package contents to a temporary directory.

By default /tmp will be used unless you tell your system otherwise.

So the advice I found in the comments of another article mentioned the following:

APT::ExtractTemplates::TempDir "/var/tmp";

…but where to put it?

Here is how I worked it.

Apt needs telling that you want to use somewhere other than /tmp as the place where it should extract package files during installation.

You want to set this as a permanent setting on your server.

Preferences for how apt should behave on a Debian Lenny system live in /etc/apt/apt.conf.d/, so create a file named 50extracttemplates in that directory that looks like the following:

APT
{
  ExtractTemplates
  {
	TempDir "/var/local/tmp";
  };
};

Here I have set it to /var/local/tmp because my datacentre, in order to be super secure 🙂, also sets /var/tmp as noexec.

( Here is a copy of my 50extracttemplates file if you prefer to download rather than copy and paste )

Change what I have in my file to /var/tmp if that is okay for your setup, or if you want to stay with /var/local/tmp then ensure you have created that directory and given it the appropriate permissions.
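If you do stay with /var/local/tmp, creating it with /tmp-like permissions might look like this (adjust to your own policy):

# create the directory with the same sticky, world-writable mode as /tmp
mkdir -p /var/local/tmp
chmod 1777 /var/local/tmp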

To help identify where this error/restriction might appear on your system I now give two sample outputs of a package install of bind9. The first illustrates the problem. The second shows different behaviour now that apt is happier extracting things.

Problem:

The following NEW packages will be installed
  bind9
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0B/255kB of archives.
After this operation, 778kB of additional disk space will be used.
Preconfiguring packages ...
Can't exec "/var/tmp/bind9.config.326141": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
open2: exec of /var/tmp/bind9.config.326141 configure  failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
bind9 failed to preconfigure, with exit status 255

…and now with the fix in place…

The following NEW packages will be installed
  bind9
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0B/255kB of archives.
After this operation, 778kB of additional disk space will be used.
Preconfiguring packages ...
Selecting previously deselected package bind9.

which looks a lot healthier.