One criticism of Windows was that folks were encouraged (by websites) to download a custom .exe file to the Desktop and double-click it.

In response, a wave of security products and some access-control changes put a stop to that.

Some users missed the convenience.

Could this ever happen on Linux / Unix?

Yes!

Here is an extract from the install instructions for a Google-publicised project:

curl -L get.yeoman.io | bash

It seems the *nix community is in too great a hurry, putting convenience before security.

I point out some of the reasons why not in the next section.

Internet pipe to Bash – why not?

To suggest such an install procedure is to ignore many of the security lessons of the past decade.

Possible risks 1: Fat-fingered redirect

By advising the user to invoke curl with the -L flag, the developer is encouraging users to trust whatever redirection the remote site has coded.

The reason curl (without -L) reports a redirection rather than following it is to allow the end user to verify the redirect themselves, rather than trusting whatever redirection is entered at the remote site.

What would happen if a bogus redirect were inserted by mistake, or by a malicious insider? If it only happened for an hour, would the company think it important enough to inform the developer population?
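
If you want to see where you are being sent before trusting it, here is a minimal sketch (Python 3, standard library only; an illustration, not part of the project's tooling) that surfaces each redirect hop instead of following it silently the way curl -L does:

# Sketch: print every redirect hop before it is followed.
import urllib.request

class LoggingRedirectHandler(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        print(f"{code} redirect: {req.full_url} -> {newurl}")
        return super().redirect_request(req, fp, code, msg, headers, newurl)

opener = urllib.request.build_opener(LoggingRedirectHandler())
response = opener.open("http://get.yeoman.io")
print("final URL:", response.geturl())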

Possible risks 2: Shifting sands

Exactly how do you personally know that the code that was there yesterday is the same code as today?

Does it come in a package with checksums and a well-understood, built-in checksum verification feature?

Can you manually download a verification checksum from a different mirror server than the one serving the actual code?
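
Where a project does publish checksums, that out-of-band check is straightforward to script. A sketch, with entirely hypothetical mirror URLs and filename:

# Sketch: verify a downloaded tarball against a checksum fetched
# from a different mirror. Both URLs are hypothetical placeholders.
import hashlib
import urllib.request

code_url = "https://mirror-a.example.org/project-1.0.tar.gz"
sum_url = "https://mirror-b.example.org/project-1.0.tar.gz.sha256"

data = urllib.request.urlopen(code_url).read()
expected = urllib.request.urlopen(sum_url).read().split()[0].decode()

actual = hashlib.sha256(data).hexdigest()
if actual != expected:
    raise SystemExit("checksum mismatch: refusing to install")
print("checksum OK")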

Possible risks 3: Compromised server

Compromised servers are always a risk for any internet hosted software.

Hosting code through acceptance into a distribution like Debian or Red Hat allows a small company to leverage the infrastructure provided.

It also elevates untrusted software somewhat, due to the integration build process, QA review, and hosting infrastructure which such distributions provide.

Bitbucket, Gitorious, Google Code, and GitHub offer some minor improvement over hosting a project yourself.

Then there are PyPI, CPAN, and other convenience collections which, whilst not offering massive assurance, at least mitigate the next problem described.

Possible risks 4: DNS hijack / redirection

DNS cache poisoning is, unfortunately, all too common.

Whilst this project is getting some backing from Google, it would be unwise to assume that it (and any mirrors?) employs DNSSEC to mitigate cache poisoning. If it did employ DNSSEC effectively, would that be on the original HTTP endpoint or the redirected one?
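
For what it is worth, a rough check of resolver-side DNSSEC validation can be scripted with the third-party dnspython package. A sketch; note the AD flag only tells you that your resolver claims to have validated the answer:

# Sketch: ask the resolver whether the answer carried the
# Authenticated Data (AD) flag. Requires dnspython.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.flags = dns.flags.RD | dns.flags.AD  # request authenticated data

answer = resolver.resolve("get.yeoman.io", "A")
validated = bool(answer.response.flags & dns.flags.AD)
print("DNSSEC validated by resolver:", validated)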

Commentary and other examples:

In fairness to the developers, there are some additional install notes, and in particular there are some hints for Debian / Ubuntu folks that include this line:

sudo npm install -g yeoman

However, those install instructions also suggest at the start that you should still do an initial pipe to bash, in case you had a previous install present.

Doing that initial pipe to bash, then switching to more traditional package management techniques, does not mitigate any of the risks described earlier.

It may be that developers are being encouraged to do this sort of hacky curl stuff by seeing this entry from the npm site:

curl https://npmjs.org/install.sh | sh

The observant amongst you will notice that there is no -L flag here, so strike off one of those risks listed earlier.

What comes after the pipe symbol ( | )? Does that make any difference from the other example?

That answer is left as an exercise for the reader.

Further examples (added after the article was first written):

The past 5 years has seen much upheaval in roles and responsibilities within Information Technology.

Business leaders felt that IT was too partitioned (and thus expensive).

The result has been a narrowing of knowledge in individual areas, with a focus now on breadth.

Image: IT – Narrowing and Connecting

Phrases such as “The developer is king” reflect this shift.

As with all power shifts, there are upsides and downsides.

Cloud computing was fuelled in part by this shift – that is an upside.

Organisations now have to introduce control processes and checks on developers, some of which, it could be argued, are a direct replacement for the partitions between people in the previous model.

Example: Source code and deployment.

If one person acting alone has full responsibility for source code and deployment, where does that leave the company when the Developer leaves?

Ah, you might argue, but the Developer is a responsible type and pushes to central company controlled repositories and deployment templates.

My question here is “How responsible?”

Surely ‘responsible’ is a sliding scale – some Developers will be more so, some less so.

I was deliberate in using the phrase ‘central company controlled’ when talking about repositories and deployment templates.

Are you a head of IT or a company Director? If so, do you know…

  • How granular the access control is on source repositories?
  • How granular the access control is on deployment templates?
  • How many people are sharing the main ‘Owner’ account?
  • The credentials for the ‘Owner’ account?

For the final two points, an argument and counter-argument:

But the Head of IT and Directors do not need access to source control; in fact, best not have them in there in case they do some damage.

Without access to source control, which non-developer have you tasked to act as code keeper on your company's behalf?

This post is providing background for a series of articles – more to follow.


For this particular project, the setup.py contained the following line:

      install_requires=['bottle','python-memcached>=1.45'],

Note: if you use ‘memcache’ or ‘memcached’ instead of what is written, you will probably get a message like:

Couldn’t find index page for ‘memcache’ (maybe misspelled?)

remote: Reading http://pypi.python.org/simple/
remote: No local packages or download links found for memcache>=1.45
remote: Best match: None
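
For context, a minimal setup.py around that install_requires line might look like the sketch below; the project name and version are hypothetical:

# Sketch: minimal setup.py carrying the requirement quoted above.
from setuptools import setup

setup(
    name="exampleproject",  # hypothetical project name
    version="0.1",
    py_modules=["exampleproject"],
    install_requires=['bottle', 'python-memcached>=1.45'],
)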

GNU/Linux systems now have a new major kernel version – 3.

Best way to test / report kernel in Python?

Example script – hosted on Bitbucket:

The Python at the link above will report what it knows about your running kernel.

Example output:

Image: example output for Linux kernel 3.x – Python reports Linux 3
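
The linked script is not reproduced here, but a minimal sketch of the same idea, using only the standard library, might be:

# Sketch: report the running kernel via the platform module.
import platform

release = platform.release()        # e.g. '3.0.0-12-generic'
major = int(release.split(".")[0])  # leading number is the major version

print("System:        ", platform.system())
print("Kernel release:", release)
if platform.system() == "Linux" and major >= 3:
    print("Running a Linux 3.x (or newer) kernel")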

It is always good to know how powerful a new cloud server is when you take delivery.

There are a range of benchmarks available; however, for pure CPU I like to use some mathematics.

Programs that tax a processor for several hours will often be compiled from source.

(Makes sense: if the thing is CPU intensive, then it really is important for it to be tailored closely to the features available on the underlying host.)

Benchmarking with fabric example – hosted on Bitbucket:

The fabric file at the link above will download the code to your server and run the benchmark.
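
The linked fabfile is likewise not reproduced here; a sketch in the same spirit, using the Fabric 1.x API, might look like this (the tarball URL is a placeholder):

# Sketch of a benchmark fabfile (Fabric 1.x API), not the original
# Bitbucket code.
from fabric.api import cd, run

def benchmark():
    """Fetch, compile, and run the CPU-bound maths benchmark remotely."""
    run("mkdir -p ~/bench")
    with cd("~/bench"):
        run("curl -O https://example.org/maths-bench.tar.gz")  # hypothetical URL
        run("tar xzf maths-bench.tar.gz")
    with cd("~/bench/maths-bench"):
        run("./configure && make")  # compile from source, tailored to this CPU
        run("./run-benchmark > result.log")  # may run for many hours

Invoked from the local machine with something like: fab -H user@newserver benchmark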

The image below shows the preparatory steps in action, as witnessed through the fabric call on the local machine.

Image: benchmarking a server from fabric

The mathematics will run for anywhere between 1 and 15 hours, depending on the raw (single-core) processing power of the server being benchmarked.

Fabric availability as standard package

Because Fabric relies on Python 2.5 or newer, you will have to do your own thinking for CentOS 5 and RedHat 5 (which have Python 2.4 by default).

Here is a useful link for Python 2.5 and Fabric on RPM-based '5' series systems.

Here is a Fabric RPM in Fedora that assumes Python version 2.7 or newer.

Having recently taken delivery of a new VPS, I logged on and immediately set about securing the setup.

Changing the root password sounds obvious, but there are alternatives:

  • Blocking root access via ssh.
  • Turning off password authentication and using known keys only.

Doing either of the above might have you thinking that the strong password the datacentre set up for you can now stay – but wait!

VPS, cloud servers, and some dedicated servers are often provisioned from templates.

Those templates may have set the password early in the build process, before secure hashing was fully configured.

 

At a glance – is the root password in /etc/shadow sufficiently hashed?

Here is an example which shares some characteristics with what I found on my newly provisioned server:

root:Npge08pfz4wuk:15000:0:730:7:::

If you are accustomed to working with /etc/passwd and /etc/shadow, you will have spotted the problem already …

The second field is way too short!

It should instead look something like this:

root:$6$OBEzW/iiKRe/ww$vfnfEFg41l1dK4zE4YM9PiRKs7ic5lvg1WgFWgi.VF0O/MYCZPELqedCmSybFQ5.0twYbc1fU6VnXqdACqELj0:15000:0:730:7:::

The second field beginning $6 indicates that the password has been hashed using SHA-2 (512), often abbreviated to sha512.

If you just want to print out the shadow password entries for those users that can log in, then use this command:

egrep -v '.*:\*|:\!' /etc/shadow | awk -F: '{print $2}'

and just double-check that every line of output begins with $6

If not, then issue a passwd command and input a new password for the problem user (root or otherwise).

Provided your system is not outdated, the proper sha512 hashing should be in place. When you regenerate the password with passwd, you should see that the second field in /etc/shadow is a lot wider, and no longer a security issue.
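
If you prefer to script the same check, here is a small sketch (run as root) that flags login-capable accounts whose hash is not sha512:

# Sketch: flag accounts in /etc/shadow whose password hash does not
# begin $6$ (sha512). Must be run as root to read the file.
with open("/etc/shadow") as shadow:
    for line in shadow:
        user, hashed = line.split(":")[:2]
        if hashed in ("", "*") or hashed.startswith("!"):
            continue  # account cannot log in with a password
        if not hashed.startswith("$6$"):
            print("weak or legacy hash for user:", user)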

 

The third field in /etc/shadow – pre-populated for you during build:

Days since Jan 1, 1970 that password was last changed

That third field will probably be set to 15000 or a number of that magnitude.

Too large a number would see 'in future' complaints being registered in a log file in /var/log/.
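
To see what a given value means in calendar terms, a quick sketch:

# Sketch: convert the third shadow field (days since 1 Jan 1970) to
# a date, and flag values that would lie in the future.
from datetime import date, timedelta

days_since_epoch = 15000  # example value from the entries above
changed = date(1970, 1, 1) + timedelta(days=days_since_epoch)
print("password last changed:", changed)  # 2011-01-26
if changed > date.today():
    print("warning: change date lies in the future")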

 

Links and Further Reading:

The link below gives an example of using mkpasswd to fix things.

However, do be aware that echoing passwords into a pipe to a secondary command is not considered ideal, due to the process details being visible in plain text to other users who might be running 'top'.

Python can also be used to generate replacement passwords – although again be careful what might be viewable by other users whilst your process is running:

python -c "import crypt, getpass, pwd; print crypt.crypt('passwordunhashed', '\$6\$SALTsalt\$')"
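
On Python 3, the same can be done with a randomly generated salt via crypt.mksalt, avoiding the hard-coded salt above (a sketch; the crypt module is Unix-only):

# Sketch: Python 3 equivalent with a random sha512 salt.
import crypt

hashed = crypt.crypt("passwordunhashed", crypt.mksalt(crypt.METHOD_SHA512))
print(hashed)  # begins $6$, suitable for the second /etc/shadow field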

/proc/sys/kernel/perf_event_paranoid is a Linux kernel flag with settings as follows:

  1. disallow cpu events for unpriv
  2. disallow kernel profiling for unpriv

There are also values 0 (disallow raw tracepoint access for unpriv) and -1 (not paranoid at all)

For virtual machines in VirtualBox 3.1, it is recommended that, if your machine's hardware supports 'Performance Events', you block kernel profiling access for the (unprivileged) VirtualBox user.

Image: setting perf_event_paranoid to 2

(Note: you must be root / a privileged user to make the change, so su or sudo as appropriate.)
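
Reading and setting the flag can also be scripted; a sketch follows. The write requires root and lasts only until reboot, unless you also add kernel.perf_event_paranoid = 2 to /etc/sysctl.conf:

# Sketch: read the current value, then (as root) raise it to 2.
path = "/proc/sys/kernel/perf_event_paranoid"

with open(path) as f:
    print("current value:", f.read().strip())

with open(path, "w") as f:  # needs root; not persistent across reboots
    f.write("2\n")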

Not all VirtualBox users will receive a warning message from VirtualBox 3.1; it depends on how new the processor in your machine is. The newer the processor, the more likely it is to support 'Performance Events' / PEBS.

Image: grep of dmesg – checking whether 'Performance Events' is supported in hardware

Jargon 1 – PEBS:

Precise Event-Based Sampling

Jargon 2 – IBS:

Instruction-based sampling (AMD specific), an idea similar to PEBS

Note: some versions of VirtualBox 3.1 incorrectly refer to perf_counter_paranoid instead of perf_event_paranoid. So if you see a message suggesting you should:

echo 2 > /proc/sys/kernel/perf_counter_paranoid

then instead look at the image I provided and use the correct /proc/sys/kernel/perf_event_paranoid.

Links and Further Reading:

For now it is important not to use hardware-aided performance counters on the host while running VMs with VirtualBox; hence the warning.

Source: Frank Mehnert in VirtualBox forums