linux skills

netstat grepping

grepping listening ports

Here is an example of grepping the full list of ports shown as LISTEN.

My example concentrates on port 80 (apache) or alternatives. Amend to fit your requirements.
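The original screenshot is not reproduced here, but a command along these lines does the job. The ports and the sample output line below are illustrative; amend the port list to fit your requirements.

```shell
# On a live box (needs net-tools; root to see process names):
#   netstat -plant | grep LISTEN | grep -E ':(80|8080|8443)[[:space:]]'
# A captured sample line stands in for real netstat output here,
# so the grep pattern itself can be demonstrated anywhere:
sample='tcp  0  0 0.0.0.0:80  0.0.0.0:*  LISTEN  1234/apache2'
matches=$(printf '%s\n' "$sample" | grep LISTEN | grep -cE ':(80|8080|8443)[[:space:]]')
echo "$matches"
```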
Source at Bitbucket below:


Chkrootkit and other tools that scan for rootkits sometimes report a Python-related '.path' file as suspect.



The script/binary responsible for creating that file is /usr/sbin/update-python-modules

from the Debian & Ubuntu package python-support

code extract from /usr/sbin/update-python-modules

There is no harm in understanding how to adapt chkrootkit or alternatives to ignore a list of locally recognised false positives; however, some might consider this 'false positive' a bug.
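Before whitelisting a flagged file, it is worth confirming which package owns it; dpkg -S reports the owning package for a path. The sketch below extracts the package name from dpkg -S style output (the sample line mirrors what dpkg -S /usr/sbin/update-python-modules reports on a python-support system):

```shell
# On a Debian/Ubuntu box: dpkg -S /path/to/flagged/file
# Sample dpkg -S output line, parsed to recover the package name:
line='python-support: /usr/sbin/update-python-modules'
pkg=${line%%:*}
echo "$pkg"
```

A suspect file owned by a known, verifiable package is a strong hint you are looking at a false positive rather than a rootkit.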

Security is a mindset, rather than any single action

My opinion. Swill it around a bit … do you agree?

Layers and lack of control:

A new starter is given access to live servers and asked to code up some simple shell scripts, to automate a bit of hosting setup and/or cron tasks.


Not having deep experience with Ubuntu Server, the new starter immediately dives in and tries to write scripts that use the extended (non-POSIX) features of the bash login shell.

Bash arrays, or let count=count+1, are examples.
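As a sketch, here is that bashism alongside its POSIX-safe equivalent, which runs identically under dash and bash:

```shell
# bash-only increment (fails under dash):
#   let count=count+1
# POSIX arithmetic expansion (works in any /bin/sh):
count=0
count=$((count + 1))
echo "$count"
```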

So the task progresses and the bashisms are in.

Another task is taken on and completed again using bash extensions similar to those mentioned above.

Now another administrator is asked to add those tasks to a daemon process or startup script.

But the scripts don’t work!

Hacker Solution: Change the default system shell from /bin/dash to /bin/bash

(diagram: task1, task2, ..., used in a daemon/cron)
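The per-script fix, rather than a system-wide one, is to declare the interpreter each script actually needs in its shebang line. A minimal sketch (the task name and array contents are invented for illustration):

```shell
# Write a script that genuinely needs bash, and say so in its shebang,
# instead of changing what /bin/sh points to system-wide.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
# Uses a bash array, so it requests bash explicitly.
tasks=(backup rotate report)
echo "${#tasks[@]}"
EOF
chmod +x "$script"
out=$("$script")
echo "$out"
```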

What is highlighted in Red are the human failings.

The creation of a daemon process / cron should not trigger a policy change on a server.

There are several good reasons why Ubuntu switched to dash as the default shell over 5 years ago.

From my personal point of view, bash is too complex, and is better suited to the task of a ‘login shell’ rather than a ‘system shell’ / ‘process shell’

The more complex a piece of software…

  • the more likely a new security hole can be found
  • the more likely a move of versions will result in breakage

The phrase "Keep it simple, stupid" feels appropriate here.

Human failings – how could this happen?

The intricacies of posix standard shell scripts are not the first thing a new Linux System Administrator will learn.

Should they be? Well, that depends on whether you see certification as important.

Pushing your new Linux System Administrator to become Red Hat (RHCE) certified is one way of introducing them to standards and the context of operational tasks.

Here is an unofficial introduction to the sort of thing that RHCE might expect.

How many years into Linux System Administration should the Administrator know about the Single Unix Specification? It all depends on your organisation and how you wish to operate.

In most organisations, the implementation of a new daemon / startup task could never trigger a policy change at the server level.

Some organisations that have no firm security policies and no configuration management documentation, might well fail to stop the actions accompanied by red indicators in my diagram.

This is an organisational / human failing and has nothing to do with the specifics of the technology. It could interchangeably be Linux, or Solaris, or Windows that lacked the human controls to prevent such a change.

Bash is secure – it is rare that there are any new exploits:

Bash changes infrequently (which is good), and most exploits in the last 10 years have been around insecure temporary file creation.

That does not mean a leaner non-interactive shell (/bin/dash) is unnecessary; if you need interactive features and feel lost without non-POSIX bashisms, then do use bash as your login shell.

From experience, the folks who use the more intricate / esoteric features of bash, are usually lacking in knowledge of sed / awk / python.

Using bash to do serious scripting (string processing in particular) is not too different from using a hammer to knock in a screw. It'll work, but there are better tools for the job.

Links and Further Reading:

Having recently taken delivery of a new VPS, I logged on and immediately set about securing the setup.

Changing the root password sounds obvious, but there are alternatives:

  • Blocking root access via ssh.
  • Turning off password authentication and using known keys only.
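Both of those measures map to standard OpenSSH directives in /etc/ssh/sshd_config (remember to reload sshd after editing, and confirm key-based login works before closing your session):

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```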

Doing either of the above might have you thinking that the strong password the datacentre set up for you can now stay – but wait!

VPS, cloud servers, and some dedicated servers are often provisioned from templates.

Those templates may have set the password early in the build process, before secure hashing was fully configured.


At a glance – is the root password in /etc/shadow sufficiently hashed?

Here is an example which shares some characteristics with what I found on my newly provisioned server:
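The screenshot is not reproduced here; a weak entry shares the shape of this hypothetical line (a 13-character DES crypt hash in the second field; all values invented):

```
root:abJn3ZZl3Pqkc:15000:0:99999:7:::
```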


If you are accustomed to working with /etc/passwd and /etc/shadow, you will have spotted the problem already …

The second field is way too short!

It should instead look something like this:
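Along these lines (the hash is shortened and invented, but the $6$salt$hash shape is what matters):

```
root:$6$SALTsalt$dGc3...86-characters-of-hash...Xq1:15000:0:99999:7:::
```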


The second field beginning $6 indicates that the password has been hashed using SHA-2 (512), often abbreviated to sha512.

If you just want to print out the shadow password entries for those users that can log in, then use this command:

egrep -v '.*:\*|:\!' /etc/shadow | awk -F: '{print $2}'

and just double-check that all lines output begin $6
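That double-check can be automated with a small awk filter that names any login-capable account whose hash does not begin $6. Demonstrated here against invented sample lines; on a real box, feed it /etc/shadow instead:

```shell
# Print usernames whose hash field is neither locked (* or !)
# nor SHA-512 ($6$...). Sample shadow lines for demonstration:
shadow_sample='root:$6$SALTsalt$longhashvalue:15000:0:99999:7:::
legacy:abJn3ZZl3Pqkc:15000:0:99999:7:::
daemon:*:15000:0:99999:7:::'
weak=$(printf '%s\n' "$shadow_sample" | awk -F: '$2 !~ /^[*!]/ && $2 !~ /^\$6\$/ {print $1}')
echo "$weak"
```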

If not, then issue a passwd command and input a new password for the problem user (root or otherwise).

Provided your system is not outdated, the proper sha512 hashing should be in place. When you regenerate the password with passwd, you should see that the second field in /etc/shadow is a lot wider, and no longer a security issue.


The third field in /etc/shadow – pre-populated for you during build:

Days since Jan 1, 1970 that password was last changed

That third field will probably be set to 15000 or a number of that magnitude.

Too large a number would see 'in the future' complaints being registered in a log file under /var/log/.
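For reference, today's value for that third field can be computed directly from the epoch time, which makes it easy to sanity-check whatever the template pre-populated:

```shell
# Field 3 of /etc/shadow is days since Jan 1, 1970; today's value:
today_days=$(( $(date +%s) / 86400 ))
echo "$today_days"
```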


Links and Further reading:

The link below gives an example of using mkpasswd to fix things.

However, do be aware that echoing passwords to pipe into a secondary command is not considered ideal, because the process arguments can be visible in plain text to other users who might be running 'ps' or 'top'.

Python can also be used to generate replacement passwords – although again be careful what might be viewable by other users whilst your process is running:

python -c "import crypt; print(crypt.crypt('passwordunhashed', '\$6\$SALTsalt\$'))"
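If mkpasswd is not to hand, openssl can also generate a SHA-512 crypt hash, and reading the password from stdin keeps it out of the process argument list entirely (a sketch; requires OpenSSL 1.1.1 or newer for the -6 option):

```shell
# printf is a shell builtin, so the password never appears as a
# command-line argument visible to other users in ps/top.
hash=$(printf '%s' 'passwordunhashed' | openssl passwd -6 -stdin)
echo "$hash"
```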

/proc/sys/kernel/perf_event_paranoid is a Linux kernel flag with settings as follows:

  • 2 – disallow kernel profiling for unprivileged users
  • 1 – disallow CPU events for unprivileged users
  • 0 – disallow raw tracepoint access for unprivileged users
  • -1 – not paranoid at all

For virtual machines in VirtualBox 3.1, it is recommended that, if your machine's hardware supports 'Performance Events', you block kernel profiling access for the (unprivileged) VirtualBox user.

Set perf_event_paranoid to 2.

(Note: You must be root / privileged user to make the change permanently, so su or sudo as appropriate)
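The runtime change is a one-liner as root (sysctl -w kernel.perf_event_paranoid=2, or echo 2 > /proc/sys/kernel/perf_event_paranoid; add kernel.perf_event_paranoid = 2 to /etc/sysctl.conf to make it permanent). Reading the current value needs no privileges at all:

```shell
# Query the current setting; no root required for reading.
current=$(cat /proc/sys/kernel/perf_event_paranoid 2>/dev/null || echo "unavailable")
echo "$current"
```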

Not all VirtualBox users will receive a warning message from VirtualBox 3.1; it depends on how new the processor in your machine is. The newer the processor, the more likely it is to support 'Performance Events' / PEBS.

Query dmesg - see if 'Performance Events' supported in hardware

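On the host that is simply dmesg | grep -i 'performance events'. A typical line when the hardware supports it looks like the invented sample below, used here so the grep can be demonstrated without privileged access to the kernel log:

```shell
# Illustrative dmesg line from a PEBS-capable CPU:
sample='[    0.055000] Performance Events: PEBS fmt3+, full-width counters'
found=$(printf '%s\n' "$sample" | grep -ci 'performance events')
echo "$found"
```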

Jargon 1 – PEBS:

Precise Event-Based Sampling

Jargon 2 – IBS:

Instruction-based sampling (AMD specific), an idea similar to PEBS

Note: Some versions of VirtualBox 3.1 incorrectly refer to perf_counter_paranoid (mistake) instead of perf_event_paranoid. So if you see a message suggesting you should:

echo 2 > /proc/sys/kernel/perf_counter_paranoid

then instead look at the image I provided and use the correct /proc/sys/kernel/perf_event_paranoid

Links and Further Reading:

For now it is important to not use hardware-aided performance counters on the host while running VMs with VirtualBox, therefore the warning.

Source: Frank Mehnert in VirtualBox forums

In a couple of years there will be whole teams who have never run a benchmark on bare metal.

With that in mind, I thought it worth recording some personal and standard benchmarks, so that future cloud provisioning can be judged against them.

Here is my table for VPS, dedicated, desktop, and laptop, which gives me somewhere to slot future cloud instances.

benchmark table

If you have access to bare metal today, then it might be worth spending an hour now doing a table.

Some of my benchmarks are very particular to Python and/or mathematics using the GMP library; however, the unixbench score (highlighted) is a standard score that you can use in your own table as a general guide.

System Administrators sometimes share their timesaving tips, here I share two.

alias baksysconfig="cd /bak/this/sysconfig/; \
/bin/echo '*shadow* *secret* etc/ssh/ssh*_key etc/.*secret* etc/*/*secret* ' | \
/usr/bin/tr ' ' '\n' | \
tar --wildcards --checkpoint=100 --exclude '*shadow*' -X - -cf etc__`/bin/date +%Y%m%dT%H%M`U.tar /etc; \
cd -"

… and …

alias infolist="(cd /var/lib/dpkg/info/;find . -type f -name '*.list' -printf '%f\n' | awk -F'.list' '{print \$1}')"

You probably already have a local shell startup script that sets your aliases and includes your own handy entries.

The --checkpoint flag for tar is a great manual check that:

  • Something actually gets put in the tar file
  • Your whole filesystem is not getting shoved into the tar file
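The first of those points can also be verified after the fact by listing the archive contents; a minimal sketch using a throwaway directory and file:

```shell
# Create a tiny archive, then confirm something actually went in.
tmp=$(mktemp -d)
printf 'hello' > "$tmp/a.txt"
tar -cf "$tmp/bak.tar" -C "$tmp" a.txt
count=$(tar -tf "$tmp/bak.tar" | wc -l)
echo "$count"
```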
When copying and pasting Linux command lines from the web, it is easy to miss a bit of the command.

Those two points I listed might sound really obvious, but when working in space-limited VPS environments, a typo that results in the whole filesystem being tarred is not a good situation 😐
