There are many choices for enterprise monitoring, and you may already have server-level monitoring in place.

Where an individual process requires inspection over time, a simple monitoring script is an option.

One of the trade-offs that you must make when monitoring in detail is sample frequency.

Too frequent and you might cause more issues (large logs or other cruft).

Not frequent enough and you might miss the very thing you are trying to observe.

A basic Python script that logs every 2 minutes to sqlite can be found at the link shown next.

On a server with Python 2 available, and equipped with the process id you wish to monitor (from ps output), you might invoke the script as follows:

python ./pidmem_sample1.py 1234

(where 1234 is the process id of your long-running process)

The script will run for 3 months or so, sampling at two-minute intervals.

If the process being monitored is killed, the monitoring script will exit as well.
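For flavour, here is a minimal sketch of the same idea (it is not the linked pidmem_sample1.py; the pidmem.sqlite filename, the table layout, and the choice of VmRSS as the metric are assumptions made for illustration):

#!/usr/bin/env python
# Minimal sketch: sample the RSS of a given pid every 2 minutes and
# store the readings in a local sqlite database (pidmem.sqlite).
import sqlite3
import sys
import time

def rss_kb(pid):
    # Read VmRSS from /proc/<pid>/status; return None once the process is gone.
    try:
        with open('/proc/%d/status' % pid) as status:
            for line in status:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])
    except IOError:
        return None

def main():
    pid = int(sys.argv[1])
    conn = sqlite3.connect('pidmem.sqlite')
    conn.execute('CREATE TABLE IF NOT EXISTS sample (ts INTEGER, rss_kb INTEGER)')
    for _ in range(65000):              # roughly 3 months of 2 minute samples
        rss = rss_kb(pid)
        if rss is None:
            break                       # monitored process has exited - so do we
        conn.execute('INSERT INTO sample VALUES (?, ?)', (int(time.time()), rss))
        conn.commit()
        time.sleep(120)
    conn.close()

if __name__ == '__main__':
    main()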

netstat grepping

grepping listening ports

Here (above) is an example of grepping the full list of ports shown as LISTEN.

My example concentrates on port 80 (apache) or alternatives. Amend to fit your requirements.
Source at Bitbucket below:
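While the linked source has the detail, a rough Python sketch of the same idea (not the Bitbucket script; it assumes the netstat binary from net-tools and Python 2.7 or newer for subprocess.check_output) might be:

#!/usr/bin/env python
# Rough sketch: list LISTEN sockets via netstat and pick out port 80.
import subprocess

output = subprocess.check_output(['netstat', '-tln'])
for line in output.decode('ascii', 'replace').splitlines():
    if 'LISTEN' in line and ':80 ' in line:     # amend the port to suit
        print(line)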

Chkrootkit and other tools that scan for rootkits sometimes report a Python-related ‘.path’ file as suspect.

Example:

/usr/lib/pymodules/python2.6/.path

The script/binary responsible for creating that file is /usr/sbin/update-python-modules, from the Debian & Ubuntu package python-support.

code extract from /usr/sbin/update-python-modules

There is no harm in understanding how to adapt chkrootkit or alternatives to ignore a list of locally recognised false positives; however, some might consider this ‘false positive’ a bug.
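One lightweight way to do the ignoring, without patching chkrootkit itself, is to post-filter its output against a local whitelist. This is only a sketch; the local_whitelist.txt and filter_known.py names are assumptions, and you would maintain the whitelist yourself with paths you have already investigated:

#!/usr/bin/env python
# Sketch: drop chkrootkit output lines that mention paths recorded in
# local_whitelist.txt (one path per line).
import sys

with open('local_whitelist.txt') as f:
    whitelist = [entry.strip() for entry in f if entry.strip()]

for line in sys.stdin:
    if any(path in line for path in whitelist):
        continue            # locally recognised false positive - skip it
    sys.stdout.write(line)

Invoked as something like: chkrootkit | python filter_known.py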

GNU/Linux systems now have a new major Kernel version – 3.

Best way to test / report kernel in Python?

Example script – hosted on Bitbucket:

The Python script at the link above will report what it knows about your running kernel.
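If you cannot reach the link, the core of it can be sketched with nothing more than the standard library platform module (the hosted script may report more detail than this):

#!/usr/bin/env python
# Sketch: report what Python knows about the running kernel.
import platform

print('system  : %s' % platform.system())      # e.g. Linux
print('release : %s' % platform.release())     # e.g. 3.2.0-4-amd64
print('version : %s' % platform.version())
print('major   : %s' % platform.release().split('.')[0])   # e.g. 3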

Example output (for a Linux kernel 3.x system): Python reports Linux 3

It is always good to know how powerful a new cloud server is when you take delivery.

There is a range of benchmarks available; however, for pure CPU testing, I like to use some Mathematics.

Programs that tax a processor for several hours will often be compiled from source.

(This makes sense: if the program is CPU intensive, then it really is important for it to be tailored closely to the features available on the underlying host.)

Benchmarking with fabric example – hosted on Bitbucket:

The fabric file at the link above will download the code to your server and run the benchmark.
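A stripped-down sketch of that sort of fabfile (Fabric 1.x style) is shown next. The download URL, directory names, and build commands are placeholders rather than the real benchmark:

# fabfile.py - sketch only, not the hosted benchmark fabfile
from fabric.api import cd, run

def benchmark():
    run('mkdir -p ~/bench')
    with cd('~/bench'):
        # fetch, compile from source, then run the cpu-bound benchmark
        run('wget -q http://example.org/benchmark.tar.gz')    # placeholder URL
        run('tar xzf benchmark.tar.gz')
        with cd('benchmark'):
            run('make')
            run('./benchmark > result.txt')

You would then call it with something like: fab -H yourserver benchmark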

The image below shows the preparatory steps in action, as witnessed through the fabric call on the local machine.

benchmarking a server from fabric

The Mathematics will run for anywhere between 1 and 15 hours, depending on the raw (single core) processing power of the server being benchmarked.

Fabric availability as standard package

Because Fabric relies on Python 2.5 or newer, you will have to do your own thinking for CentOS 5 and Red Hat 5 (which ship Python 2.4 by default).

Here is a useful link for Python 2.5 and Fabric on RPM-based ‘5’ series systems.

Here is a Fabric RPM in Fedora that assumes Python 2.7 or newer.

Security is a mindset, rather than any single action

My opinion. Swill it around a bit … do you agree?

Layers and lack of control:

A new starter is given access to live servers and asked to code up some simple shell scripts to automate a bit of hosting setup and/or cron tasks.

#!/bin/bash

Not having deep experience with Ubuntu Servers, the new starter immediately dives in and tries to write scripts that use the extended (non-POSIX) features of the bash login shell.

Bash Arrays or let count=count+1 are examples.

So the task progresses and the bashisms are in.

Another task is taken on and completed again using bash extensions similar to those mentioned above.

Now another administrator is asked to add those tasks to a daemon process or startup script.

But the scripts don’t work! When run by the system shell rather than a bash login shell (on Ubuntu, /bin/sh points at dash), the bashisms fail.

Hacker Solution: Change the default system shell from /bin/dash to /bin/bash

Diagram: task1, task2, ..., use in daemon/cron

The steps highlighted in red are the human failings.

The creation of a daemon process / cron should not trigger a policy change on a server.

There are several good reasons why Ubuntu switched to dash as the default system shell over 5 years ago.

From my personal point of view, bash is too complex, and is better suited to the role of a ‘login shell’ rather than a ‘system shell’ / ‘process shell’.

The more complex a piece of software…

  • the more likely a new security hole can be found
  • the more likely a move of versions will result in breakage

The phrase “Keep it simple, stupid” feels appropriate here.

Human failings – how could this happen?

The intricacies of POSIX-standard shell scripts are not the first thing a new Linux System Administrator will learn.

Should they be? Well, that depends on whether you see certification as important.

Pushing your new Linux System Administrator to become Red Hat (RHCE) certified is one way of introducing them to standards and the context of operational tasks.

Here is an unofficial introduction to the sort of thing that RHCE might expect.

How many years into Linux System Administration should the Administrator know about the Single Unix Specification? It all depends on your organisation and how you wish to operate.

In most organisations, the implementation of a new daemon / startup task could never trigger a policy change at the server level.

Some organisations that have no firm security policies and no configuration management documentation might well fail to stop the actions accompanied by red indicators in my diagram.

This is an organisational / human failing and has nothing to do with the specifics of the technology. It could interchangeably be Linux, or Solaris, or Windows that lacked the human controls to prevent such a change.

Bash is secure – it is rare that there are any new exploits:

Bash changes infrequently (which is good), and most exploits in the last 10 years have been around insecure temporary file creation.

That does not mean that using a simpler non-interactive shell (/bin/dash) is unnecessary; if you need interactive features and feel lost without non-POSIX bashisms, then do use bash as your login shell.

From experience, the folks who use the more intricate / esoteric features of bash are usually lacking in knowledge of sed / awk / python.

Using bash to do serious scripting (string processing in particular) is not too different from using a hammer to knock in a screw. It’ll work, but there are better tools for the job.

Links and Further Reading:

Having recently taken delivery of a new VPS, I logged on and immediately set about securing the setup.

Changing the root password sounds obvious, but there are alternatives:

  • Blocking root access via ssh.
  • Turning off password authentication and using known keys only.

Doing either of the above might have you thinking that the strong password the datacentre set up for you can now stay – but wait!

VPS, cloud servers, and some dedicated servers are often provisioned from templates.

Those templates may have set the password early in the build process, before secure hashing was fully configured.

 

At a glance – is the root password in /etc/shadow sufficiently hashed?

Here is an example which shares some characteristics with what I found on my newly provisioned server:

root:Npge08pfz4wuk:15000:0:730:7:::

If you are accustomed to working with /etc/passwd and /etc/shadow, you will have spotted the problem already …

The second field is way too short! A 13-character string like that is an old-style DES crypt hash, not a modern salted hash.

It should instead look something like this:

root:$6$OBEzW/iiKRe/ww$vfnfEFg41l1dK4zE4YM9PiRKs7ic5lvg1WgFWgi.VF0O/MYCZPELqedCmSybFQ5.0twYbc1fU6VnXqdACqELj0:15000:0:730:7:::

The second field beginning $6 indicates that the password has been hashed using SHA-2 (512), often abbreviated to sha512.

If you just want to print out the shadow password entries for those users that can log in, then use this command:

egrep -v '.*:\*|:\!' /etc/shadow | awk -F: '{print $2}'

and just double-check that all lines output begin with $6

If not, then issue a passwd command and input a new password for the problem user (root or otherwise).

Provided your system is not outdated, proper sha512 hashing should be in place. When you regenerate the password with passwd, you should see the second field in /etc/shadow become much wider, and no longer a security issue.
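The same check can be done in Python if you prefer; a minimal sketch (run as root, since /etc/shadow is not world readable) would be:

#!/usr/bin/env python
# Sketch: flag any /etc/shadow entry that can log in with a password
# but whose hash is not sha512 ($6$).
with open('/etc/shadow') as shadow:
    for line in shadow:
        user, hashed = line.split(':')[:2]
        if hashed in ('', '*') or hashed.startswith('!'):
            continue            # locked or passwordless entry
        if not hashed.startswith('$6$'):
            print('weak or legacy hash for: %s' % user)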

 

The third field in /etc/shadow – pre-populated for you during build:

Days since Jan 1, 1970 that password was last changed

That third field will probably be set to 15000 or a number of that magnitude.

Too large a number would see ‘in future’ complaints being registered in a log file in /var/log/.
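To sanity-check the magnitude, today’s ‘days since Jan 1, 1970’ figure can be printed with a quick one-liner:

python -c "import time; print(int(time.time() // 86400))"

Anything noticeably larger than that output would be a date in the future.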

 

Links and Further Reading:

The link below gives an example of using mkpasswd to fix things.

However, do be aware that echoing passwords to pipe into a secondary command is not considered ideal, due to the process details being visible in plain text to other users who might be running ‘top’.

Python can also be used to generate replacement passwords – although again be careful what might be viewable by other users whilst your process is running:

python -c "import crypt, getpass, pwd; print crypt.crypt('passwordunhashed', '\$6\$SALTsalt\$')"
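To avoid the plain-text password showing up in the process listing at all, a small interactive sketch using getpass may be preferable (the SALTsalt salt is carried over from the example above purely for illustration; use a random salt in practice):

#!/usr/bin/env python
# Sketch: prompt for the password so it never appears in ps/top output,
# then print a sha512 ($6$) hash suitable for /etc/shadow.
import crypt
import getpass

plain = getpass.getpass('New password: ')
print(crypt.crypt(plain, '$6$SALTsalt$'))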