Qcode Software – Tcl Web Applications
http://www.qcode.co.uk

Task dependencies project management
Wed, 08 May 2013
http://www.qcode.co.uk/task-dependencies-project-management/

I’ve been working on some project management tools recently. The idea is to manage multiple projects which follow the same pattern – the same set of tasks, but re-used.

Data-model wise, I have 3 main tables – “Task”, which represents the tasks shared between all projects, “Job” which represents one particular project-task pairing, and “Task_dependency”, which represents dependencies between tasks.

An example would be building houses – “Construct Roof” might be one task, while “Construct the roof of house number 7” would be a job matching that task to a single project. Meanwhile, there might be an entry in “Task_dependency” stating that the task “Construct Roof” depends on the task “Build Walls”.

So I started working on some of the algorithms needed for common tasks. One of my first concerns was data integrity – using SQL alone it’s hard to impose a constraint like “no task can end up indirectly depending on itself”. The dependencies must form a Directed Acyclic Graph (DAG), with tasks as the “nodes” and dependencies as the “edges”. I decided that the best way to handle this was to have my application check any updates to the task_dependency table for loops, and the easiest way to do that was to start at the task whose dependencies are to be updated, and work backwards along the dependencies to see if we can find our way back to that task. We could work forwards instead, but I couldn’t see any benefit to either direction.
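The author’s implementation isn’t shown, so here is a hedged sketch of that loop check in JavaScript (the names `wouldCreateCycle` and `deps` are illustrative, not from the original): before inserting a proposed dependency, walk backwards from the task being depended upon; if we can reach the task being updated, the new edge would close a loop.

```javascript
// Sketch: would adding "task depends on dependsOn" create a cycle?
// deps maps each task to the list of tasks it depends on.
function wouldCreateCycle(deps, task, dependsOn) {
  const seen = new Set();
  const stack = [dependsOn];
  while (stack.length > 0) {
    const current = stack.pop();
    if (current === task) return true;   // found our way back: a loop
    if (seen.has(current)) continue;     // already explored this node
    seen.add(current);
    for (const d of deps[current] || []) stack.push(d);
  }
  return false;
}

// House-building example from the post: roof depends on walls,
// walls depend on foundations.
const deps = { roof: ["walls"], walls: ["foundations"], foundations: [] };

wouldCreateCycle(deps, "foundations", "roof"); // true  -- roof already (indirectly) depends on foundations
wouldCreateCycle(deps, "roof", "foundations"); // false -- no path from foundations back to roof
```

The `seen` set makes the walk terminate even if the existing data already contains a loop.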

The more difficult part was writing an auto-scheduler. Given a fixed date (the start or finish date), and durations for each task, I needed to plan out a schedule that avoided any dependency conflicts. I spent a while trying to find ways to sort the tasks by dependency, looking at SQL Common Table Expressions (CTEs) – in particular “WITH RECURSIVE” – before eventually deciding that this wasn’t much help to me. The lack of support for aggregation and non-linear recursion meant that I’d end up trying to find every path from every start node to every node in the graph, and then selecting the longest one for each node. Perhaps unsurprisingly, PostgreSQL wasn’t managing to optimise that.

A few weeks after writing my own algorithm in Tcl, I found a Wikipedia article on topological sorting, which described almost exactly what I was trying to do, even though I’d never heard the phrase before. Since I was trying to find the longest path at the same time as sorting, my home-made algorithm doesn’t look exactly like either of the ones described, but in principle it’s similar to the breadth-first approach.

Once tasks are sorted topologically, working from a given date was comparatively easy. I’ve ended up working backwards from a chosen finish date (this being what the client required), and scheduling each task as late as possible without causing any conflicts. By iterating over the tasks in reverse topological order, I can ensure that each task has its start and finish date calculated before the tasks it depends on. In addition to this, I’m sorting the tasks by the longest path from each task node to the finish (as far as possible while maintaining topological order), and by displaying tasks in this order, tasks which are scheduled later tend to appear further down the screen.

If you’re working forward from a start date, scheduling each task as early as possible, I’d recommend displaying tasks in topological order first, and in order of longest path from start second.
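The backward pass described above can be sketched as follows. This is a JavaScript reconstruction, not the author’s Tcl code; `schedule`, `taskDeps` and the day-offset dates are all illustrative assumptions.

```javascript
// Topologically sort tasks (Kahn's algorithm), then work backwards
// from a fixed finish day, placing each task as late as possible.
// taskDeps maps each task to the tasks it depends on; durations in days.
function schedule(taskDeps, durations, finishDay) {
  // Build reverse edges (task -> tasks that depend on it) and counts.
  const dependents = {}, pending = {};
  for (const t of Object.keys(taskDeps)) {
    dependents[t] = [];
    pending[t] = taskDeps[t].length;
  }
  for (const t of Object.keys(taskDeps)) {
    for (const d of taskDeps[t]) dependents[d].push(t);
  }

  // Kahn-style topological sort: repeatedly take a task whose
  // dependencies have all been placed in the order already.
  const order = [];
  const ready = Object.keys(taskDeps).filter(t => pending[t] === 0);
  while (ready.length > 0) {
    const t = ready.shift();
    order.push(t);
    for (const dep of dependents[t]) {
      if (--pending[dep] === 0) ready.push(dep);
    }
  }

  // Iterate in REVERSE topological order, so every task that depends
  // on t is scheduled before t itself; "as late as possible" is then
  // just the minimum of the dependents' start days.
  const plan = {};
  for (const t of order.slice().reverse()) {
    const finish = Math.min(finishDay, ...dependents[t].map(d => plan[d].start));
    plan[t] = { start: finish - durations[t], finish };
  }
  return plan;
}

const taskDeps = { walls: ["foundations"], roof: ["walls"], foundations: [] };
const plan = schedule(taskDeps, { foundations: 5, walls: 10, roof: 3 }, 100);
// roof ends on day 100; walls must finish by roof's start; and so on.
```

The cycle check from earlier guarantees the sort consumes every task; with a loop in the data, Kahn’s algorithm would leave tasks unplaced.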

Tcl Regular Expressions – Greedy or Non-greedy?
Fri, 22 Mar 2013
http://www.qcode.co.uk/tcl-regular-expressions-greedy-or-non-greedy/

I came across an interesting problem while writing a small regular expression which would return a multi-line match from an ssh_config file.

Using the following ssh_config as an example:

Host polecat
HostName polecat.qcode.co.uk
User gerry
IdentityFile ~/.ssh/id_gerry_rsa

Host stoat
HostName stoat.qcode.co.uk
User fergus
IdentityFile ~/.ssh/id_fergus_rsa

 Host ferret
 HostName ferret.qcode.co.uk
 User farquar
 IdentityFile ~/.ssh/id_farquar_rsa

Host weasel
HostName weasel.qcode.co.uk
User crawford
IdentityFile ~/.ssh/id_crawford_rsa

The requirement was to match an entire host clause. So, that would start matching at Host, and end immediately before the next Host clause, but not include that Host string in the match.

Firstly, we have to make sure we do not activate newline-sensitive matching (regexp -line), because our match will span multiple lines. This ensures ^ matches only the beginning of the string, and $ matches only the end. Newlines, \n, are just part of the string.

To match the point immediately before the next Host clause, we take advantage of the very useful positive look-ahead constraint (?=re), which will match a point where substring re begins, without matching the substring itself. So to constrain a match to the position before whitespace followed by Host, or the end of the string, we can use (?:(?=\n\s*Host\s)|$) (the ?:’s are used so this grouping isn’t captured as a submatch).
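The lookahead trick itself is not Tcl-specific. As an aside (not from the original post), the same idea in JavaScript, where greediness is resolved per quantifier so .*? behaves as you would expect; the two-host config here is a shortened hypothetical:

```javascript
// Shortened hypothetical ssh_config for illustration.
const config = `Host polecat
HostName polecat.qcode.co.uk

Host stoat
HostName stoat.qcode.co.uk`;

// Match the polecat clause, stopping at the position just before the
// next Host clause (or at the end of the string) via positive lookahead.
const clause = config.match(/Host\s+polecat[\s\S]*?(?=\n\s*Host\s|$)/)[0];
// clause: "Host polecat\nHostName polecat.qcode.co.uk"
```

In JS the non-greedy [\s\S]*? alone is enough; the per-expression preference rules discussed below are a peculiarity of Tcl’s engine.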

We can step through the evolution of the regular expression, using the -nocase -inline switches to display our match.

The first version, to match only the first host clause, was the following:

{^Host\s+polecat\s+.*(?:(?=\n\s*Host\s)|$)}

This starts a match at the start of the string, and stops before another Host clause begins. But this will not work, since it will attempt to match as much as possible, e.g.

tclsh8.5 [~]regexp -nocase -inline {^Host\s+polecat\s+.*(?:(?=\n\s*Host\s)|$)} $config
{Host polecat
HostName polecat.qcode.co.uk
User gerry
IdentityFile ~/.ssh/id_gerry_rsa

Host stoat
HostName stoat.qcode.co.uk
User fergus
IdentityFile ~/.ssh/id_fergus_rsa

 Host ferret
 HostName ferret.qcode.co.uk
 User farquar
 IdentityFile ~/.ssh/id_farquar_rsa

Host weasel
HostName weasel.qcode.co.uk
User crawford
IdentityFile ~/.ssh/id_crawford_rsa}


We need to change our quantifier to non-greedy matching to make the match as small as possible. You might think that the following would be sufficient (with the bulk of the match being covered by the quantifier .*?). But no:


tclsh8.5 [~]regexp -nocase -inline {^Host\s+polecat\s+.*?(?:(?=\n\s*Host\s)|$)} $config
{Host polecat
HostName polecat.qcode.co.uk
User gerry
IdentityFile ~/.ssh/id_gerry_rsa

Host stoat
HostName stoat.qcode.co.uk
User fergus
IdentityFile ~/.ssh/id_fergus_rsa

 Host ferret
 HostName ferret.qcode.co.uk
 User farquar
 IdentityFile ~/.ssh/id_farquar_rsa

Host weasel
HostName weasel.qcode.co.uk
User crawford
IdentityFile ~/.ssh/id_crawford_rsa}


This is due to how Tcl decides whether greedy or non-greedy matching is used.

If we look at the Tcl reference section on Matching we can see:

A branch has the same preference as the first quantified atom in it which has a preference.

So although it is the content matched by .*? which we wish to change, its matching behaviour is governed by the first quantified atom, which is the \s+ in Host\s+polecat.
Therefore, to get the behaviour we want, rather counterintuitively, we need to modify the first quantifier:


tclsh8.5 [~]regexp -nocase -inline {^Host\s+?polecat\s+.*(?:(?=\n\s*Host\s)|$)} $config
{Host polecat
HostName polecat.qcode.co.uk
User gerry
IdentityFile ~/.ssh/id_gerry_rsa}

A great way of debugging regular expressions is to use the -about switch. In our original example we get the following output:

tclsh8.5 [~]regexp -about -nocase -inline {^Host\s+polecat\s+.*(?:(?=\n\s*Host\s)|$)} $config

This outputs the regular expression’s descriptive flags. In this case we have REG_ULOOKAHEAD, which shows the regular expression contains a lookahead; REG_UNONPOSIX, which shows this is not a POSIX regular expression; and finally REG_ULOCALE, which indicates a dependency on locale.

One flag NOT present is REG_USHORTEST, which would show that the regular expression looks for the shortest match.

Checking our second version, we can see the addition of REG_USHORTEST:

tclsh8.5 [~]regexp -about -nocase -inline {^Host\s+?polecat\s+.*(?:(?=\n\s*Host\s)|$)} $config

The final thing to do with our ssh_config regular expression is to allow it to match host clauses which do not appear at the beginning of the config string. This would seem quite straightforward, but there is a small surprise: we lose non-greedy matching again, even though the first quantifier is still specified as non-greedy:


tclsh8.5 [~]regexp -nocase -inline {(?:^|\n|\s)Host\s+?stoat\s+.*(?:(?=\n\s*Host\s)|$)} $config
{Host stoat
HostName stoat.qcode.co.uk
User fergus
IdentityFile ~/.ssh/id_fergus_rsa

 Host ferret
 HostName ferret.qcode.co.uk
 User farquar
 IdentityFile ~/.ssh/id_farquar_rsa

Host weasel
HostName weasel.qcode.co.uk
User crawford
IdentityFile ~/.ssh/id_crawford_rsa}


tclsh8.5 [~]regexp -about -nocase -inline {(?:^|\n|\s)Host\s+?stoat\s+.*(?:(?=\n\s*Host\s)|$)} $config

This is likely to be due to another Matching rule, namely:

An RE consisting of two or more branches connected by the | operator prefers longest match.

So how do we turn this into a non-greedy match? We have no quantifier which we can change to non-greedy matching.

The answer is to ADD such a quantifier specifically to be able to do this. The benign quantifier {1,1}, which allows exactly one occurrence of the preceding atom, changes nothing about what is matched; by wrapping the whole regular expression in a group and applying the non-greedy form {1,1}?, we turn the whole match non-greedy. No other non-greedy specifier is needed:


tclsh8.5 [~]regexp -about -nocase -inline {(?:(?:^|\n|\s)Host\s+stoat\s+.*(?:(?=\n\s*Host\s)|$)){1,1}?} $config

tclsh8.5 [~]regexp -nocase -inline {(?:(?:^|\n|\s)Host\s+stoat\s+.*(?:(?=\n\s*Host\s)|$)){1,1}?} $config
{Host stoat
HostName stoat.qcode.co.uk
User fergus
IdentityFile ~/.ssh/id_fergus_rsa}
Object-oriented Javascript part 2
Tue, 19 Mar 2013
http://www.qcode.co.uk/object-oriented-javascript-part-2/

In part 1, I finished off with an example of “class-like” inheritance, but I didn’t get around to explaining it. In this post, I’ll try to cover a couple of key points: namely, the “heir” function, and closures created with immediately invoked function expressions.


First of all, let’s look at the “heir” function (newer browsers support Object.create, which with a single argument will do the same thing).

function heir(p) {
 // This function creates a new empty object that inherits from p
 var f = function(){};
 f.prototype = p;
 return new f();
}

As the comment says, what “heir” does is take a single object “p” and return a new object which inherits from p. What this is generally used for is creating a prototype chain, by having one prototype inherit from another prototype.

Because of how the “new” keyword in Javascript works, objects created with

var myShape = new Shape("blue");

inherit from Shape.prototype, and objects created with

var myRect = new Rectangle("Red", 5, 3);

will inherit from Rectangle.prototype.
After the lines

var superProto = Shape.prototype;
Rectangle.prototype = heir(superProto);

Rectangle.prototype will inherit from Shape.prototype. This means that you can call “myRect.isRed()” and Javascript will follow the prototype inheritance chain until it gets to Shape.prototype.isRed, and call that. Or in more general terms “rectangles inherit all the properties and methods of shapes”.

There is one side-effect to be aware of when using “heir”. Normally, prototypes have a “constructor” property referring back to the constructor function for which they were created. So

var myArray = new Array();
return myArray.constructor;

will give us the constructor function “Array”. However, after calling

Rectangle.prototype = heir(superProto);

Rectangle.prototype.constructor points back to “Shape”, which is not what we want. To correct that, we simply add

Rectangle.prototype.constructor = Rectangle;


The code

(function(){
 // ...
})();

forms an immediately invoked function expression. In other words, we create a function, call it once, and then forget about it. You might be wondering why we would do that, when we could just run the code without all those extra brackets. The reason is to control the scope of our variables. When we write

var superProto = Shape.prototype;

we ensure that superProto is only visible within this anonymous function. If a global variable called superProto exists, our local variable will hide it, and if we want to use another variable called superProto later on, it won’t clash with this code.

Most importantly, if we declare any functions within this block of code, they will “remember” that superProto was visible when they were created, and they will still have access to it later. This is because of “closures” – if you create a function while inside another function, the “inner” function will be able to see all the variables that were visible when it was created, and continue being able to see them after the outer function has finished executing.
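A minimal illustration of that closure behaviour (the counter names here are hypothetical, not from the original posts): the inner function keeps access to the outer function’s local variable long after the outer function has returned.

```javascript
function makeCounter() {
  var count = 0;          // local to this invocation of makeCounter
  return function () {    // the inner function closes over "count"
    count += 1;
    return count;
  };
}

var counter = makeCounter();
counter(); // 1
counter(); // 2 -- "count" persisted after makeCounter returned
var another = makeCounter();
another(); // 1 -- each call to makeCounter creates a fresh closure
```

Each invocation of the outer function produces an independent closure, which is exactly why the IIFE above gives superProto a private, persistent home.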

Multi-Part MIME Messages
Fri, 15 Mar 2013
http://www.qcode.co.uk/mime-multi-part-messages/

RFC 733 is one of the oldest and most fundamental RFCs.
It is the origin of the standard used today to construct internet email messages, RFC 5322.
This standard describes email messages consisting of two major sections :-

  • A Header that describes information about the email. Fields such as From, To, CC etc.
  • A Body that is essentially the content of the message to be sent.

    The header and body must be separated by a null line and can only contain 7-bit ASCII characters.

    So what does this have to do with MIME Multi-Part messages?

    Restricting the message body to 7-bit ASCII characters limited the majority of messages to languages based on the Latin alphabet.

    Multipurpose Internet Mail Extensions (MIME), RFC 2045 through RFC 2049, are a set of Internet Standards that layer additional formatting standards on the body section of an email.
    To support :-

  • Text in character sets other than ASCII.
  • Non-text attachments.
  • Message bodies with multiple parts.

    Ok, so I know why MIME is used, but how are MIME Multi-Part Messages constructed?

    MIME Headers

    MIME defines additional header lines that inform the receiving client about how the body should be interpreted.


    MIME-Version

    If this header is present it indicates that the email body is MIME formatted.
    Eg. “MIME-Version: 1.0”


    Content-Type

    This header is used to specify the type of content sent in the email.
    Eg. “Content-Type: text/plain”


    Content-Disposition

    The Content-Disposition header was added to instruct the client to either:

  • Automatically display the MIME content “inline” when the message is displayed, or
  • Require some form of user action to open the “attachment” MIME content.

    In addition, the Content-Disposition header also provides fields for specifying the name of the file, creation date and modification date.
    Eg. “Content-Disposition: attachment; filename="image1.png"”


    Content-Transfer-Encoding

    Indicates the encoding scheme the receiving client should use to decode the 7-bit ASCII MIME content.
    Eg. “Content-Transfer-Encoding: base64”

    Multi-Part Messages

    MIME Multi-Part messages allow the email body to be divided into multiple distinct parts.
    Each MIME part has its own set of MIME Headers and is enclosed in the MIME boundary specified in the “Content-Type” header.
    A MIME boundary is a string that must be unique and guaranteed not to occur in any of its MIME parts.
    It is positioned before the first, after the last and in between each MIME part.
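The boundary placement described above can be shown with a short sketch (not from the original post; `multipart` and its inputs are hypothetical): the boundary line, prefixed with “--”, goes before the first part, between parts, and after the last part with a trailing “--”.

```javascript
// Assemble a multipart body from a boundary string and a list of
// parts, each with its own MIME headers and body text.
function multipart(boundary, parts) {
  const lines = [];
  for (const part of parts) {
    lines.push("--" + boundary);                       // before each part
    for (const [name, value] of Object.entries(part.headers)) {
      lines.push(name + ": " + value);                 // per-part MIME headers
    }
    lines.push("", part.body);                         // blank line, then content
  }
  lines.push("--" + boundary + "--");                  // closing boundary
  return lines.join("\r\n");
}

const body = multipart("AlternativeBoundaryString", [
  { headers: { "Content-Type": 'text/plain;charset="utf-8"' },
    body: "This is the plain text part of the email." },
  { headers: { "Content-Type": 'text/html;charset="utf-8"' },
    body: "<p>This is the html part of the email.</p>" },
]);
```

A real implementation would also need to pick a boundary guaranteed not to occur in any part, as noted above.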


    There are essentially two main types of email attachments :-

    File attachments are standard attachments, normally using the “attachment” Content-Disposition.
    They appear as attachments that can be downloaded or viewed by the user.
    Each MIME part representing a file attachment should be nested inside a multipart/mixed structure.

    Embedded attachments, on the other hand, are slightly different in that they are automatically displayed when the user views the email.
    Images, for instance, can be embedded within an HTML MIME part by referencing the attachment’s Content-ID in an image’s src attribute.
    Each MIME part representing an embedded attachment should be nested within a multipart/related structure and have Content-Disposition set to “inline”.


    Below is an example email containing multiple MIME parts :-

  • HTML version of the message.
  • Plain text version of the message.
  • 2 base64 encoded file attachments.
  • 2 base64 encoded inline attachments used to embed images in the email.

    MIME Multi-Part Nesting Structure

    Email Source Code

    From: from@qcode.co.uk
    To: to@qcode.co.uk
    Subject: Example Email
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="MixedBoundaryString"

    --MixedBoundaryString
    Content-Type: multipart/related; boundary="RelatedBoundaryString"

    --RelatedBoundaryString
    Content-Type: multipart/alternative; boundary="AlternativeBoundaryString"

    --AlternativeBoundaryString
    Content-Type: text/plain;charset="utf-8"
    Content-Transfer-Encoding: quoted-printable

    This is the plain text part of the email.

    --AlternativeBoundaryString
    Content-Type: text/html;charset="utf-8"
    Content-Transfer-Encoding: quoted-printable

        <img src=3D=22cid:masthead.png=40qcode.co.uk=22 width 800 height=3D80=
        <p>This is the html part of the email.</p>=0D
        <img src=3D=22cid:logo.png=40qcode.co.uk=22 width 200 height=3D60 =5C=

    --AlternativeBoundaryString--

    --RelatedBoundaryString
    Content-Type: image/png;name="logo.png"
    Content-Transfer-Encoding: base64
    Content-Disposition: inline;filename="logo.png"
    Content-ID: <logo.png@qcode.co.uk>

    --RelatedBoundaryString
    Content-Type: image/png;name="masthead.png"
    Content-Transfer-Encoding: base64
    Content-Disposition: inline;filename="masthead.png"
    Content-ID: <masthead.png@qcode.co.uk>

    --RelatedBoundaryString--

    --MixedBoundaryString
    Content-Type: application/pdf;name="Invoice_1.pdf"
    Content-Transfer-Encoding: base64
    Content-Disposition: attachment;filename="Invoice_1.pdf"

    --MixedBoundaryString
    Content-Type: application/pdf;name="SpecialOffer.pdf"
    Content-Transfer-Encoding: base64
    Content-Disposition: attachment;filename="SpecialOffer.pdf"

    --MixedBoundaryString--
    PCI DSS Requirement 8: Part 3 – User & Password Policy
    Mon, 24 Dec 2012
    http://www.qcode.co.uk/pci-dss-requirement-8-part-3-user-password-policy/

    The remainder of PCI-DSS Section 8 can be split into two parts. Sections 8.5.1 to 8.5.8 describe procedural requirements which you should have in place around user management policy.

    However, sections 8.5.9 onwards describe a series of requirements for password strength and user account settings.

    The final two requirements can be handled in their own right.

    8.5.15 If a session has been idle for more than 15 minutes, require the user to re-authenticate to re-activate the terminal or session.

    This is simple to implement with a universal shell timeout. At the end of our /etc/profile we can insert the following:

    TMOUT=900
    readonly TMOUT
    export TMOUT

    This tells the shell to log out any users who are inactive for longer than 900 secs (15 minutes). In an effort to stop users getting into the habit of bypassing this timeout, we set the variable as readonly so it can’t be easily changed (except by a superuser).

    8.5.16 Authenticate all access to any database containing cardholder data. This includes access by applications, administrators, and all other users.
    Restrict user direct access or queries to databases to database administrators.

    This requirement is fairly simple to understand. It is not always as simple to implement, depending on how your application is designed. But the intent is clear.

    So we’ll move on to the main bulk of the post, which is user account and password policy.

    Let’s review the remaining requirements:

    8.5.9 Change user passwords at least every 90 days.
    8.5.10 Require a minimum password length of at least seven characters.
    8.5.11 Use passwords containing both numeric and alphabetic characters.
    8.5.12 Do not allow an individual to submit a new password that is the same as any of the last four passwords he or she has used.
    8.5.13 Limit repeated access attempts by locking out the user ID after not more than six attempts.
    8.5.14 Set the lockout duration to a minimum of 30 minutes or until administrator enables the user ID.

    pam_cracklib & pam_tally

    The first place many Linux system administrators look to fulfil these requirements is pam_cracklib for password complexity rules (reqs 8.5.10, 8.5.11, 8.5.12), with pam_tally to track failed login attempts (reqs 8.5.13, 8.5.14). And this is indeed a solution.

    However if, like us, you have a fairly large estate of servers to roll this out on, then the administrative overhead of maintaining a password policy on each server separately quickly becomes unreasonable, and users end up with many potentially unsynchronised passwords to keep track of.

    Centralised User Authentication

    The only real option for us was to centralise user authentication.

    There are different degrees of centralisation depending on what you are trying to achieve. Many organisations go the whole hog and use LDAP as a centralised directory service (taking the job of managing individual system users away from each server), often in addition to a separate mechanism providing authentication flexible enough to fulfil PCI-DSS requirements. Kerberos is one such authentication mechanism.

    This LDAP/Kerberos combination would be a good approach if you have a large and dynamic user base to manage. LDAP allows a system admin to set up and delete users centrally, and Kerberos manages a central authentication method for each user.

    But PCI-DSS is mostly concerned with authentication, so we will concentrate here on what Kerberos can provide.


    I won’t go into too much detail on how Kerberos works and should be configured since there are many such guides around. But I will show how to use a functioning Kerberos realm to enforce PCI password policy.

    So what we are trying to achieve with Kerberos is:

    1. Centralised Password Policy
    2. and as a bonus we can have, Single Sign-On

    The servers required are the following:

    • Master KDC & Admin Server
      This server is responsible for authenticating requests from Kerberos clients, and issuing Ticket Granting Tickets if the request is successful. The client requests will be generated by libpam-krb5, which will map a Linux system user to a Kerberos principal of the same name. If the Kerberos user is successfully authenticated, libpam-krb5 will allow the log in to proceed. This server is also responsible for servicing Kerberos administration requests (password changes, user additions etc.)
    • Slave KDC
      One of the weaknesses of Kerberos is that it introduces a single point of failure for sign-ins, given the KDC has to be available to allow anyone to log in. To mitigate this we can have a read-only slave KDC which will service log-in requests if the Master server becomes unavailable. (This is done using kprop on the master and kpropd on the slave.)

    Password Policy

    So given a functioning KDC propagating to a read-only slave we can now use it to enforce the PCI password policy.

    This can be done using the kadmin.local command under root on the KDC to add a default policy:

    addpol -minlength 7 -minclasses 3 -maxlife "90 days" -history 4 -lockoutduration "30 mins" -failurecountinterval 0 -maxfailure 6 default

    Client Configuration

    Each user needs to be added as a principal on the KDC AND also exist as a system user on the client itself. But since the system user cannot be used to log in by itself (see pam configuration), it can be set to non-expiring:

    chage --inactive=-1 --maxdays=-1 --expiredate=-1 username

    libpam-krb5 must be installed (along with krb5-config & krb5-user) to act as the sole PAM module servicing authentication requests. Unix authentication MUST be disabled.
    To check which PAM modules are enabled you can run:

    dpkg-reconfigure libpam-runtime

    The /etc/pam.d/common-auth config will look something like the following:

    # here are the per-package modules (the "Primary" block)
    auth	[success=1 default=ignore]	pam_krb5.so minimum_uid=1000
    # here's the fallback if no module succeeds
    auth	requisite			pam_deny.so
    # prime the stack with a positive return value if there isn't one already;
    # this avoids us returning an error just because nothing sets a success code
    # since the modules above will each just jump around
    auth	required			pam_permit.so
    # and here are more per-package modules (the "Additional" block)
    # end of pam-auth-update config

    So now an ssh login will attempt to authenticate against the Kerberos master you have configured for this client (the /etc/krb5.conf file should list both the master AND slave KDCs for the realm to allow the slave to service logins).

    Firewall Configuration

    The use of Kerberos means you will need to open some ports within the realm:

    • 749 – kadmin to the master KDC
    • 88,750 – KDC to master and slave KDCs
    • 464 – kpasswd to the master KDC

    Time Synchronisation

    An important thing to know about Kerberos is that it’s very sensitive to differences in time between servers (whenever a server unexpectedly refuses to authenticate you, check the time first!).
    For this reason you really need time synchronisation within the realm… which you should have for PCI anyway. See our NTP Time Synchronisation post for details.

    DNS Reverse Lookups

    Another aspect which Kerberos is very fussy about is that reverse DNS lookups for local IP addresses must match the specified FQDN. This is not always the case in some private networks.

    This can be achieved using /etc/hosts or, more practically for larger networks, a local DNS server.

    Single Sign On

    Once you have all your clients using Kerberos for authentication, you can take advantage of credential delegation so you don’t have to re-authenticate while jumping from client to client within the same Kerberos realm.

    This is still PCI compliant as long as you ensure that all remote access to the network in question undergoes two-factor authentication via an SSH gateway server.

    Firstly make sure the KDC is configured to use forwardable tickets in /etc/krb5.conf:

            forwardable = true

    Then, on the clients, configure your ssh client to delegate kerberos credentials in /etc/ssh/ssh_config:

        GSSAPIAuthentication yes
        GSSAPIDelegateCredentials yes

    This means when you ssh to another machine within the realm, your kerberos credentials will be forwarded, and if valid, you will be granted access.


    One bug which can be annoying is when a Kerberos password expires. You will be prompted for a new one. However, if the new password fails to meet the policy requirements, PAM does not display a message to this effect. So to the user it looks like the new password has been accepted, whereas in the log you may find something like the following:

    Jun 27 09:32:02 krbmaster kadmind[27631]: chpw request from for user@QCODE.CO.UK: Password does not contain enough character classes

    This shows that the password is in fact unchanged. The user, being unaware of this, naturally attempts to log in using their new password and ends up being locked out.

    This bug was introduced in kadmind v1.7 and is fixed in 1.9.1 – so hopefully we should see an end to this in Wheezy, which has 1.10.

    Object-oriented Javascript
    Fri, 02 Nov 2012
    http://www.qcode.co.uk/object-oriented-javascript/

    I thought I’d share a few of the key points that helped me understand how objects, classes and inheritance work in javascript, along with an example of “class-like” inheritance.

    A different approach to “new”

    First, where most of the object-oriented languages I’ve come across use classes, javascript uses prototypes instead.

    • In class-based languages, new objects are created by using the keyword “new”, followed by the name of the class and the arguments to be passed in to the constructor function of that class.
    • In javascript, new objects are created using the keyword “new”, followed by the constructor function itself, with the arguments to be passed to it.

    Constructor functions in javascript are just regular functions, but commonly if we design a function to be used as an object constructor, we capitalise the first letter of the function name.

    So for example:

    function Rectangle(w, h) {
     this.width = w;
     this.height = h;
     this.area = w * h;
    }

    var myRectangle = new Rectangle(5, 3);
    myRectangle.area; //Returns 15

    Because we used the “new” keyword, a new object is created, and the function “Rectangle” is called with the keyword “this” referring to the new object. If the constructor function returns an object, that’s what the “new” expression will yield, but if it doesn’t then the newly created object will be returned instead.
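That return-value rule can be seen with a pair of hypothetical constructors (not from the original post); note that only a returned *object* replaces the new object, while a returned primitive is silently ignored:

```javascript
function Plain(x) { this.x = x; }           // returns nothing
function Swapped() { return { x: 99 }; }    // returns its own object
function Prim() { return 42; }              // returns a primitive

new Plain(1).x;                   // 1  -- the newly created object is returned
new Swapped().x;                  // 99 -- the returned object wins
new Swapped() instanceof Swapped; // false: the new "this" was discarded
new Prim() instanceof Prim;       // true: the primitive 42 was ignored
```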

    Functions are objects, too

    The next thing I needed to learn was that in javascript, functions are basically just a special type of object – not only can you “call” them, but you can also pass them by reference, and they have properties and methods of their own.

    Prototypes – what they are

    One of those properties is called “prototype”, and whenever a new function is defined, a new object is created and assigned to the prototype property of that function. The object is almost empty – it has a property called “constructor”, which refers to the function for which it was created, but that’s about it. You can add properties and methods to that object, though, and this comes in handy because of one more thing that the “new” keyword does.

    When a function is called with the “new” keyword, the prototype property of that function defines properties and methods to be inherited by the newly created object.

    So for example:

    function Rectangle(w, h) {
     this.width = w;
     this.height = h;
    }
    Rectangle.prototype.sides = 4;
    Rectangle.prototype.area = function() {
     return this.width * this.height;
    };

    var myRectangle = new Rectangle(2, 5);
    myRectangle.width; //Returns 2
    myRectangle.sides; //Returns 4
    myRectangle.area(); //Returns 10

    Prototypes – how they work

    This inheritance is carried out by having new objects store a reference to the prototype object – so if the prototype is modified, all the objects sharing that prototype will be affected. When you attempt to get a property of an object, javascript will first look to see if that property has been set locally for that object, and if it hasn’t, it will attempt to get the property from the object’s prototype instead.

    This can result in javascript following a “chain” of prototypes, if the object’s prototype has a prototype of its own. Each prototype only exists once in memory, though; they are linked together by references.
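That lookup is easy to model. Here is a minimal Python sketch (purely illustrative: objects are plain dicts with a hypothetical "__proto__" key; this shows the algorithm, not JavaScript's actual internals):

```python
def lookup(obj, name):
    # Walk the prototype chain: check the object itself, then each
    # prototype in turn, until the property is found or the chain ends.
    while obj is not None:
        if name in obj:
            return obj[name]
        obj = obj.get("__proto__")
    raise AttributeError(name)

shape_proto = {"draw": "generic draw"}
rect_proto = {"sides": 4, "__proto__": shape_proto}
my_rect = {"width": 2, "__proto__": rect_proto}

print(lookup(my_rect, "width"))  # found on the object itself: 2
print(lookup(my_rect, "sides"))  # found on rect_proto: 4
print(lookup(my_rect, "draw"))   # found further up the chain
```

Because the objects hold references to one shared prototype, changing `rect_proto` here would immediately affect every object that links to it.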

    Jumping ahead…

    I’d like to finish off (for now) with an example of how I write class-like inheritance in javascript. I’m using an immediately-invoked function expression to create a closure, and a function called “heir”* to handle prototype chaining, and a variable called “superProto” to implement the kind of functionality that languages like Java provide with the “super” keyword:

    function heir(p) {
        // This function creates a new empty object that inherits from p
        var f = function(){};
        f.prototype = p;
        return new f();
    }

    // Class "Shape"
    var Shape = function(color) {
        this.color = color;
    };
    Shape.prototype.draw = function(){
        alert('Cannot draw a shape...');
    };
    Shape.prototype.whatAmI = function(){
        alert("I'm a shape");
    };
    Shape.prototype.isRed = function(){
        return this.color === "Red";
    };

    // Class "Rectangle"
    var Rectangle;
    (function(){
        var superProto = Shape.prototype;
        Rectangle = function(color, width, height){ // The constructor function
            superProto.constructor.call(this, color);
            this.width = width;
            this.height = height;
        };
        Rectangle.prototype = heir(superProto); // These 2 lines handle the inheritance
        Rectangle.prototype.constructor = Rectangle;
        Rectangle.prototype.draw = function(){ // Other methods of class "Rectangle"
            superProto.draw.call(this); // Call the superclass method first
            alert('...not even a rectangle...');
        };
        Rectangle.prototype.whatAmI = function(){
            alert("I'm a rectangle");
        };
        Rectangle.prototype.area = function(){
            return this.width * this.height;
        };
    }());

    // Some testing
    var rect = new Rectangle("Red", 5, 3);
    rect.isRed(); //Returns true
    rect.area(); //Returns 15
    rect.draw(); //Alerts "Cannot draw a shape..." and "...not even a rectangle..."
    rect.whatAmI(); //Alerts "I'm a rectangle"

    jsfiddle of example

    *(Since writing this post, I have learnt that newer browsers may support “Object.create”, which provides the same functionality as “heir”, and more besides.)

    Database Schema Changes (Version 2.0)
    http://www.qcode.co.uk/database-schema-changes-version-2-0/
    Fri, 28 Sep 2012 15:49:52 +0000

    Back in June I wrote a post explaining how our developers keep their development environments up to date with Database Schema Changes.
    Since then we have made some improvements to automate the process which I’d like to share with you.

    Here’s how it works…

    In the database we create a table to keep track of the current schema version.

    create table schema (
        version int primary key
    );
    insert into schema (version) values (1);

    Any database schema changes are then recorded in a version controlled file called changes.tcl
    File: changes.tcl

    schema_update 1 {
      db_dml {alter table customer add column firstname varchar(100) not null}
    }
    schema_update 2 {
      db_dml {alter table customer add column lastname varchar(100) not null}
    }

    Schema updates can now be manually applied by copying and pasting schema update statements directly into your server’s control port,
    although we prefer to automatically source changes.tcl every time our server is restarted.


    To accomplish this we have implemented a simple Tcl proc that checks whether a schema update script should be applied, given the current schema version.
    The source code for our schema_update Tcl proc can be found in our Qcode Tcl Library.

    If the first argument matches the current schema version then the script in the second argument is executed and the current schema version incremented.
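As a rough sketch of that logic (the real proc is Tcl and lives in the Qcode Tcl Library; the in-memory `db` dict and `applied` list below are hypothetical stand-ins for the database):

```python
def schema_update(db, version, script):
    # Run the script only if it targets the current schema version,
    # then increment the version so the same update never runs twice.
    if db["version"] == version:
        script()
        db["version"] += 1

# A fake in-memory "database" standing in for the schema table
db = {"version": 1}
applied = []
schema_update(db, 1, lambda: applied.append("add firstname"))
schema_update(db, 2, lambda: applied.append("add lastname"))
schema_update(db, 1, lambda: applied.append("add firstname"))  # skipped: already applied
print(db["version"], applied)  # prints: 3 ['add firstname', 'add lastname']
```

Because each update bumps the version exactly once, sourcing the whole changes.tcl file is idempotent: re-running it applies only the updates the database has not yet seen.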

    PCI DSS Requirement 8: Part 2 – Stunnel & Plain Text Passwords
    http://www.qcode.co.uk/pci-dss-requirement-8-part-2-stunnel-plain-text-passwords/
    Fri, 28 Sep 2012 15:21:16 +0000
    This requirement would be very simple to comply with if the only passwords we had to worry about were SSH passwords. SSH and linux do exactly what is asked for, namely:

    8.4 Render all passwords unreadable during transmission and storage on all system components using strong cryptography.

    However, often life is not as simple.

    Sometimes legacy applications which are essential to an environment’s operation don’t support the use of SSL and attempt to authenticate by sending plain text passwords.

    How can we comply with this PCI requirement in these cases?

    As always there are a few options.

    Let’s assume we have an application server (APP_SRV1) which connects to a database server (DB_SRV1) listening on port 5432, but does so by sending a clear-text password for the database to DB_SRV1:5432.

    SSH Tunnelling

    We could create an SSH tunnel which the application would use to communicate with the database.

    The rough process on APP_SRV1 server would be as follows.

    SSH key exchange:

    ssh-keygen -t rsa
    ssh-copy-id db_user@DB_SRV1

    Then create a tunnel:

    ssh -L 5555:localhost:5432 db_user@DB_SRV1

    (you must ensure the DB server is listening on localhost:5432).

    The command is not immediately transparent. It says: listen on local port 5555, and use SSH to tunnel to DB_SRV1; once there, pass the traffic to localhost:5432.
    Between the two servers, the traffic actually travels over the standard SSH port 22, which you need to know for firewall purposes.

    So APP_SRV1 would send its request to localhost:5555 instead of DB_SRV1:5432. Then on DB_SRV1, the request would emerge from the SSH tunnel and be passed to localhost:5432 where the database is listening.

    Since this would need to be a persistent connection, something like autossh would need to be used to monitor and maintain the tunnel, in a manner similar to the following:

    autossh -M 0 -f -q -N  -o "ServerAliveInterval 10" -o "ServerAliveCountMax 3" -L 5555:localhost:5432 db_user@DB_SRV1


    IPsec could also be used. The encrypted connection is designed to be persistent but it would encrypt all communication between 2 specific hosts which may be overkill depending on your configuration.


    We found a good compromise between persistence and targeted encryption was stunnel.

    This is quite similar in concept to SSH tunnelling but packaged in a more convenient way.

    Stunnel Installation

    Stunnel will need to be installed on both the application server and the database server.

    Installation is via the debian package:

    apt-get install stunnel4

    Stunnel Server Configuration

    First we will configure the server side of the connection, which will listen on DB_SRV1. Edit /etc/stunnel/stunnel.conf so it reads:

    ; Some debugging stuff useful for troubleshooting
    ;debug = 7
    ;foreground = yes
    ;output = /var/log/stunnel4/stunnel.log
    pid = /var/run/stunnel4.pid
    cert = /etc/ssl/certs/stunnel.cert
    key = /etc/ssl/private/stunnel.key
    options = NO_SSLv2
    accept = DB_SRV1:5433
    connect = localhost:5432

    This will listen on port 5433 on DB_SRV1 and pass traffic on to the DB, which is listening locally on localhost:5432. The options line, together with a suitable ciphers setting, ensures we do not use weak ciphers for encryption (“Render all passwords unreadable during transmission [..] using strong cryptography“).

    And to create the required certificate and key:

    [david@debian:~]$ sudo openssl genrsa -out  /etc/ssl/private/stunnel.key 2048
    Generating RSA private key, 2048 bit long modulus
    e is 65537 (0x10001)
    [david@debian:~]$ sudo openssl req -new -x509 -key /etc/ssl/private/stunnel.key -out /etc/ssl/certs/stunnel.cert -days 365
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    [david@debian:~]$ chmod 400 /etc/ssl/private/stunnel.key
    [david@debian:~]$ chmod 400 /etc/ssl/certs/stunnel.cert

    We want this connection to restart if the server restarts, so in /etc/default/stunnel4
    ensure the following is present:

    ENABLED=1
    Now we can restart stunnel and the connection will start listening:

    /etc/init.d/stunnel4 restart

    Stunnel Client Configuration

    And for the client side of the connection, which will run on APP_SRV1, we use the same process except with the following configuration in /etc/stunnel/stunnel.conf:

    ; Some debugging stuff useful for troubleshooting
    ;debug = 7
    ;foreground = yes
    ;output = /var/log/stunnel4/stunnel.log
    pid = /var/run/stunnel4.pid
    cert = /etc/ssl/certs/stunnel.cert
    key = /etc/ssl/private/stunnel.key
    client = yes
    accept = localhost:5432
    connect = DB_SRV1:5433

    This configuration says stunnel will listen locally on localhost:5432 and will connect to DB_SRV1:5433 as a client to pass on any connection it receives. DB_SRV1:5433 of course is the server side of the stunnel connection and everything sent over that connection will be encrypted.

    Once you have configured the keys, /etc/default/stunnel4, and restarted, you configure your application to speak to localhost:5432 instead of DB_SRV1:5432, and you now have encrypted communication between the two points.

    Although the configuration effort appears greater for stunnel than for SSH tunnels, once up and running, we found stunnel to be very robust and reliable.

    PCI DSS Requirement 8: Part 1 – Two-factor Authentication
    http://www.qcode.co.uk/pci-dss-requirement-8-part-1-two-factor-authentication/
    Thu, 27 Sep 2012 15:43:35 +0000

    Requirement 8: Assign a unique ID to each person with computer access

    8.1 Assign all users a unique ID before allowing them to access system components or cardholder data.

    In our post about Rootsh logging we touched on the PCI DSS Requirement that all users must use a unique user id and never use shared accounts. This is where that requirement is made more explicit.

    In the linux world the most common shared account you are likely to come across as a system administrator is the root account, and we covered how we can safely disable this once sudo is properly configured.

    But even if we have unique accounts on every system which is in, or connected to, the cardholder data environment, we still have more work to do.

    8.2 In addition to assigning a unique ID, employ at least one of the following methods to authenticate all users:
    – Something you know, such as a password or passphrase
    – Something you have, such as a token device or smart card
    – Something you are, such as a biometric

    8.3 Incorporate two-factor authentication for remote access (network-level access originating from outside the network) to the network by employees, administrators, and third parties. (For example, remote authentication and dial-in service (RADIUS) with tokens; terminal access controller access control system (TACACS) with tokens; or other technologies that facilitate two-factor authentication.)
    Note: Two-factor authentication requires that two of the three authentication methods (see Requirement 8.2 for descriptions of authentication methods) be used for authentication. Using one factor twice (for example, using two separate passwords) is not considered two-factor authentication.

    So before any user can gain access to the cardholder data environment, they must undergo a two factor authentication. There are many commercial solutions for this which involve distribution of tokens or the use of biometrics, but we were looking for an open-source route which did not involve additional hardware.

    One Time Passwords

    We need 2 out of the 3 methods listed in Requirement 8.2 to have true two-factor authentication.

    The first method, “something you know”, is the users’ standard system password (for their unique user id). More on the requirements for this in a later post.

    An accepted second method is a one-time password. You’ll note Requirement 8.3 above says two separate passwords are not sufficient, so how can we use a one-time password for the second authentication method?

    This is because one time passwords are in effect “something you have” and not “something you know”. You need to be in possession of a means of password generation whether that be a token, a mobile phone to receive it as an SMS, or even a paper list. People don’t, and generally can’t, memorise a one-time password for each login.

    S/Key System

    The S/Key system is a means of generating one-time passwords based on a random seed, a secret key, and a sequence number. A user can use an offline calculator to generate these passwords as needed.

    S/Key takes an initial secret (which ideally should never be typed into anything except an offline device) and, along with a random seed, applies a one-way hash function to this secret n times. Let’s say n is 500.

    The server then stores the 500th hash result. There is no way of moving backwards through the hash chain to expose the initial secret so this is perfectly safe.

    Upon login, the user is required to supply the 499th hash result. The server can easily check if it is given the correct answer by applying the hash function to the 499th hash, to see if it equals the 500th hash result. If it does, the 499th hash result is stored and the user is authenticated.

    Next login will require the 498th hash result and so on. And because we cannot move backwards through the hash chain, the user must use an S/Key calculator to produce the desired result by supplying the secret, and applying the hash the required number of times to it.
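The chain mechanics can be made concrete with a small Python sketch (simplified: it chains plain MD5 digests, whereas real S/Key folds each hash to 64 bits and renders it as six short words):

```python
import hashlib

def skey_hash(secret, seed, n):
    """Apply MD5 to the seeded secret n times: the n-th link in the chain."""
    h = (seed + secret).encode()
    for _ in range(n):
        h = hashlib.md5(h).digest()
    return h

# Initialisation: the server stores only the 500th hash result.
stored = skey_hash("my secret", "de6448", 500)

# Login: the user's calculator produces the 499th result offline...
response = skey_hash("my secret", "de6448", 499)

# ...and the server verifies it by hashing once more and comparing.
assert hashlib.md5(response).digest() == stored

# On success the server stores the 499th result; the next login
# will require the 498th, and so on down the chain.
stored = response
```

Note that the server never learns the secret: it only ever sees hash results, and the one-way property means it cannot compute the 498th result from the 499th.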

    It may sound complicated, but it is very easy in practice, so let’s look at an implementation.

    SSH Gateways & Firewall Configuration

    Before we move onto installation of a one-time password system, we should mention the network design. Two factor authentication is required for all “remote access”. Unless you have few enough machines in your environment that you’d be happy to install one-time password implementations on all of them, we’d highly recommend designating two or more SSH gateway machines which must be remotely logged into first before any of the rest of your environment can be accessed.

    The firewall should be configured to only allow inbound traffic on port 22 to your SSH gateways, and all SSH gateways would have one time passwords enforced in addition to standard system level user passwords. So to remotely access any machine in your environment, you are forced to undergo two factor authentication. Once on a gateway machine, you can SSH internally to other machines in the environment using user passwords only.

    OPIE Installation

    OPIE stands for One-Time Password in Everything and is an implementation which is packaged for Debian.

    On a designated SSH gateway server, do the following to install:

    apt-get install libpam-opie opie-server

    Then you need to configure SSH so that it requires authentication from the OPIE PAM module as well as the current method of authentication.

    We found we needed the following changes.

    In /etc/ssh/sshd_config we needed:

    ChallengeResponseAuthentication yes
    PasswordAuthentication yes
    UsePAM yes

    Then in /etc/pam.d/sshd, after the @include common-auth line, add:

    auth            required        pam_opie.so

    (we preferred to have this line outside of /etc/pam.d/common-auth so that when installing or configuring other PAM modules, the OPIE requirement would be left alone).

    This means, once the standard common-auth authentication has been successful, pam_opie.so is then called… these are our two factors of authentication.

    So to activate this (but don’t do it yet!) we would:

    /etc/init.d/ssh restart

    Setting up OPIE Users

    Of course, if we activated OPIE now, no-one would be able to log in since we first need to initialise the users.

    For maximum security, the user should have an offline S/Key calculator ready for use at this point. There are many implementations around for iOS or Android etc. so choose whichever you find easiest to use.

    A user would log into the SSH gateway server and run:

    opiepasswd
    opiepasswd never asks for the initial secret. This is important since if it was sniffed then all the S/Key passwords generated would be compromised. Only the offline S/Key calculator needs the initial secret.

    opiepasswd by default asks for a response from your S/Key calculator for the 499th password and supplies the random(ish) salt to use:

    [david@debian:~]$ opiepasswd
    Adding david:
    You need the response from an OTP generator.
    New secret pass phrase:
    	otp-md5 499 de6448

    Now on your calculator you supply the challenge 499 de6448, a secret, and select md5 as your hash method. You will then be provided with a response similar to the following to enter:


    The server is now initialised for this user. You can see inside /etc/opiekeys that this password has been recorded (in hex) as the 499th for user david:

    david 0499 de6448           f32f743a123c9bac  Sep 27,2012 12:36:38

    Once OPIE is activated, on the next login, libpam-opie will challenge me with:

    $ ssh david@ssh_gate1
    otp-md5 498 de6448 ext, Response:

    And again, the calculator is used to provide the response.

    Low Tech OPIE

    If the user doesn’t have, or prefers not to use, a mobile device capable of running an S/Key calculator, two-factor authentication can still be achieved.

    One way would be to use a separate, local linux machine to generate a list of passwords.
    The user can install opie-server:

    apt-get install opie-server

    then generate a list of one-time passwords based on a secret key and chosen sequence number and salt (increase -n for more):

    [david@debian:~]$ opiekey -n 20 499 lo123
    Using the MD5 algorithm to compute response.
    Reminder: Don't use opiekey from telnet or dial-in sessions.
    Enter secret pass phrase: 

    This old-fashioned user can then print this list off and keep it with them for future reference when needing to log in. Very low tech indeed, but it’s really not all that different to having a token.

    Then on the SSH gateway, to set up the user for the first time, we would run opiepasswd as before except we specify the sequence 499, and salt lo123, which the user used on their local machine – note the response is the same as number 499 on the list:

    [david@debian:~]$ opiepasswd -s lo123 -n 499
    Adding david:
    You need the response from an OTP generator.
    New secret pass phrase:
    	otp-md5 499 lo123
    ID david OTP key is 499 lo123

    So the user is set up with the last key from the list and will work their way down it every time they log in.

    PCI DSS Requirement 10: Part 4 – Log File Monitoring (and more) with OSSEC
    http://www.qcode.co.uk/pci-dss-requirement-10-part-4-log-file-monitoring-and-more-with-ossec/
    Wed, 26 Sep 2012 15:24:47 +0000

    Next we are going to look at log file monitoring. Here’s what section 10.6 has to say about it:

    10.6 Review logs for all system components at least daily. Log reviews must include those servers that perform security functions like intrusion-detection system (IDS) and authentication, authorization, and accounting protocol (AAA) servers (for example, RADIUS).

    Note: Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6.

    We also have a requirement from 10.5 we still need to address which is:

    10.5.5 Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).

    And, to jump ahead, we have this:

    11.5 Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.

    Examples of files that should be monitored:
    - System executables
    - Application executables
    - Configuration and parameter files
    - Centrally stored, historical or archived, log and audit files

    These requirements are all things that OSSEC can help us with. OSSEC is an open source host-based intrusion detection system.

    It has many features, but what we are mainly interested in here is:

    • Log Monitoring
    • File Integrity Checking

    Logfile Monitoring

    OSSEC works by having a Manager and Agents. The Agents forward information to the Manager which applies a set of rules to govern whether alerts need to be raised or not. By default OSSEC monitors:

    • /var/log/messages
    • /var/log/auth.log
    • /var/log/syslog
    • /var/log/mail.info
    • /var/log/dpkg.log
    • /var/log/apache2/error.log
    • /var/log/apache2/access.log

    And it has a very good set of rules to cover many events we would like to hear about, e.g. packages being installed, authentication errors, SQL errors etc.
    See later for how to add extra log files to be monitored.

    Manager Installation

    OSSEC has no Debian package available as standard and has a slightly non-standard installation procedure (from a Debian point of view) which compiles from source via an interactive installation script. It’s easy enough to perform, but makes you jump through a few extra hoops if you want to automate the task.
    If you’d like to avoid compiling from source on each agent, there is an option to do a binary installation of pre-compiled binaries. We found we had too many different architectures for this to be a significant gain, however.

    Download and unpack the tarball:

    cd /usr/local/src
    wget http://www.ossec.net/files/ossec-hids-2.6.tar.gz
    tar -zxvf ossec-hids-2.6.tar.gz
    cd ossec-hids-*

    Install some pre-reqs:

    apt-get install openssl make gcc libssl-dev dpkg-dev libc-dev libc6-dev linux-libc-dev zlib1g-dev

    Rather than use the interactive installation routine (which is ./install.sh, if you are happy to install manually), we preferred to distribute a pre-populated /usr/local/src/ossec-hids-2.6/etc/preloaded-vars.conf which would automate the task.

    For the server install our /usr/local/src/ossec-hids-2.6/etc/preloaded-vars.conf looked like this:

    USER_LANGUAGE="en"     # For english
    ### Server/Local Installation variables. ###
    # USER_ENABLE_EMAIL enables or disables email alerting.
    # USER_EMAIL_ADDRESS defines the destination e-mail of the alerts.
    # USER_EMAIL_SMTP defines the SMTP server to send the e-mails.
    # USER_ENABLE_SYSLOG enables or disables remote syslog.
    # USER_ENABLE_FIREWALL_RESPONSE enables or disables
    # the firewall response.
    #### exit ? ###

    You won’t be affected by this now, but during Debian upgrades while having OSSEC installed, you might come across a “missing LSB tags and overrides” error for OSSEC which breaks the upgrade (it happened to us during the Lenny to Squeeze upgrade). To avoid this we can simply add LSB headers to the existing init script in the source directory.

    Replace /usr/local/src/ossec-hids-2.6/src/init/ossec-hids.init with:

    #!/bin/sh
    # OSSEC         Controls OSSEC HIDS
    # Author:       Daniel B. Cid
    # Modified for slackware by Jack S. Lai
    ### BEGIN INIT INFO
    # Provides:          ossec
    # Required-Start:    $network $named $remote_fs $syslog
    # Required-Stop:     $network $named $remote_fs $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    ### END INIT INFO

    . /etc/ossec-init.conf
    if [ "X${DIRECTORY}" = "X" ]; then
        # Bail out if the OSSEC directory is not configured
        exit 1
    fi

    start() {
            ${DIRECTORY}/bin/ossec-control start
    }

    stop() {
            ${DIRECTORY}/bin/ossec-control stop
    }

    status() {
            ${DIRECTORY}/bin/ossec-control status
    }

    case "$1" in
      start)
            start
            ;;
      stop)
            stop
            ;;
      restart)
            stop
            start
            ;;
      status)
            status
            ;;
      *)
            echo "*** Usage: $0 {start|stop|restart|status}"
            exit 1
            ;;
    esac

    exit 0

    Now if we run the installation script, the installation should complete without prompting:

    ./install.sh
    The commands to stop and start the server are:

    /var/ossec/bin/ossec-control stop
    /var/ossec/bin/ossec-control start

    Once installed, we like to tidy up the packages required for the installation (see PCI Requirement 2.2.4 – “Remove all unnecessary functionality, such as scripts, drivers, features, subsystems, file systems, and unnecessary web servers”)

    apt-get remove zlib1g-dev patch make  g++ gcc gcc-4.4 g++-4.4 fakeroot dpkg-dev cpp cpp-4.4 bzip2 build-essential binutils


    An issue when performing unattended installs of OSSEC agents is that the manage_agents command must be run to import a key generated by the server for each client. This is a manual process, so when installing OSSEC on many servers we need another way.

    That other way is authd. It is a service which runs on the manager and automatically provides keys to agents which connect on the port authd is listening on. It does not do any authentication, so never leave the authd daemon running when you are not actively installing agents.

    So at this point we need to generate the SSL keys authd requires before we start it for the first time.

    openssl genrsa -out /var/ossec/etc/sslmanager.key 2048
    openssl req -new -x509 -key /var/ossec/etc/sslmanager.key -out /var/ossec/etc/sslmanager.cert -days 365

    To start authd the following command is run:

    /var/ossec/bin/ossec-authd -p 1515

    Remember to stop it again when the installations are finished.

    Firewall Configuration

    Since the agents need to be able to connect to the OSSEC manager during install, you need to allow communication from the agents to the manager on 1514 (the standard OSSEC port) and 1515 (the port used for authd).
    Port 1515 should only be open when you are actively installing agents.

    Agent Installation

    This is exactly the same as the manager install, except that the /usr/local/src/ossec-hids-2.6/etc/preloaded-vars.conf file used to automate the install is different, and we need to connect to the authd server daemon at the end to be allocated our key.

    Note: you must supply the IP address of your OSSEC manager where required.

    USER_LANGUAGE="en"     # For english
    ### Agent Installation variables. ###
    #### exit ? ###

    Once the install is complete, the final step is to be allocated a key. Using authd this is done by:

    /var/ossec/bin/agent-auth -m <manager_ip> -p 1515

    Adding Alerts

    Out of the box, OSSEC will alert on a great many useful events. But you will almost certainly want to create additional alerts.

    As an example, say we wanted OSSEC to generate an alert when the following log entry is found:

    Sep 18 10:26:01 client1 stunnel: LOG3[15636:140069884897024]: SSL_accept: Peer suddenly disconnected

    We can test this using a useful binary supplied called ossec-logtest. On your manager machine, run ossec-logtest and paste in the log entry in question to see what will currently happen:

    [david@ossecmgr:/var/ossec/bin]# sudo ./ossec-logtest
    2012/09/26 13:06:39 ossec-testrule: INFO: Reading local decoder file.
    2012/09/26 13:06:40 ossec-testrule: INFO: Started (pid: 29628).
    ossec-testrule: Type one log per line.
    Sep 18 10:26:01 client1 stunnel: LOG3[15636:140069884897024]: SSL_accept: Peer suddenly disconnected
    **Phase 1: Completed pre-decoding.
           full event: 'Sep 18 10:26:01 client1 stunnel: LOG3[15636:140069884897024]: SSL_accept: Peer suddenly disconnected'
           hostname: 'client1'
           program_name: 'stunnel'
           log: 'LOG3[15636:140069884897024]: SSL_accept: Peer suddenly disconnected'
    **Phase 2: Completed decoding.
           No decoder matched.

    So, as we already knew, no alert is generated.
    The file /var/ossec/rules/local_rules.xml on the OSSEC manager machine is where we can add rules to trigger new alerts. (During upgrades this rules file is left alone, so if you don’t want to lose your custom rules, keep them here.) We won’t go into all the ins and outs of rule creation since there is plenty of documentation on the OSSEC website.

    We can see it has parsed the program_name, hostname, and log entries, so at the most basic level we can use these to raise our alert. Local rules should use a rule id from 100,000 to 119,999 to ensure they won’t interfere with existing rules.

    The following rule is a fairly self-explanatory addition to /var/ossec/rules/local_rules.xml:

     <rule id="100100" level="10">
       <match>SSL_accept: Peer suddenly disconnected</match>
       <description>Stunnel peer disconnect</description>
     </rule>

    So we can try ossec-logtest again with the new rule:

    [david@ossecmgr:/var/ossec/bin]# ./ossec-logtest 
    2012/09/26 13:24:58 ossec-testrule: INFO: Reading local decoder file.
    2012/09/26 13:24:59 ossec-testrule: INFO: Started (pid: 29670).
    ossec-testrule: Type one log per line.
    Sep 18 10:26:01 client1 stunnel: LOG3[15636:140069884897024]: SSL_accept: Peer suddenly disconnected
    **Phase 1: Completed pre-decoding.
           full event: 'Sep 18 10:26:01 client1 stunnel: LOG3[15636:140069884897024]: SSL_accept: Peer suddenly disconnected'
           hostname: 'client1'
           program_name: 'stunnel'
           log: 'LOG3[15636:140069884897024]: SSL_accept: Peer suddenly disconnected'
    **Phase 2: Completed decoding.
           No decoder matched.
    **Phase 3: Completed filtering (rules).
           Rule id: '100100'
           Level: '10'
           Description: 'Stunnel peer disconnect'
    **Alert to be generated.


    Ignoring Alerts

    Another problem you are likely to have is receiving alerts which you feel are unnecessary.

    One such alert for us was the mail server rejecting emails because of greylisting. This is normal operation as far as we are concerned so we didn’t want to hear about it.

    The log file entry in question is:

    Apr  6 12:42:51 mailsrv1 postfix/smtpd[20508]: NOQUEUE: reject: RCPT from unknown[]: 450 4.2.0 : Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/domain.co.uk.html; from= to= proto=ESMTP helo=

    Testing this in ossec-logtest we can see where the alert is coming from:

    [david@ossecmgr1:/var/ossec/bin]# ./ossec-logtest 
    2012/09/26 13:55:46 ossec-testrule: INFO: Reading local decoder file.
    2012/09/26 13:55:46 ossec-testrule: INFO: Started (pid: 29698).
    ossec-testrule: Type one log per line.
    Apr  6 12:42:51 mailsrv1 postfix/smtpd[20508]: NOQUEUE: reject: RCPT from unknown[]: 450 4.2.0 : Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/domain.co.uk.html; from= to= proto=ESMTP helo=
    **Phase 1: Completed pre-decoding.
           full event: 'Apr  6 12:42:51 mailsrv1 postfix/smtpd[20508]: NOQUEUE: reject: RCPT from unknown[]: 450 4.2.0 : Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/domain.co.uk.html; from= to= proto=ESMTP helo='
           hostname: 'mailsrv1'
           program_name: 'postfix/smtpd'
           log: 'NOQUEUE: reject: RCPT from unknown[]: 450 4.2.0 : Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/domain.co.uk.html; from= to= proto=ESMTP helo='
    **Phase 2: Completed decoding.
           decoder: 'postfix'
           srcip: ''
           id: '450'
    **Phase 3: Completed filtering (rules).
           Rule id: '3303'
           Level: '5'
           Description: 'Sender domain is not found (450: Requested mail action not taken).'
    **Alert to be generated.

    Now that we know the alert is coming from rule id 3303, we can override it in /var/ossec/rules/local_rules.xml:

     <rule id="100101" level="0">
       <if_sid>3303</if_sid>
       <match>Greylisted</match>
       <description>Greylisting ignored</description>
     </rule>

    This says the following:

    • level="0" – don’t alert upon matching
    • if_sid – matches if the specified rule id has matched
    • match – matches if the specified string is found in the log entry

    So testing once more with this extra rule:

    [david@ossecmgr1:/var/ossec/bin]# ./ossec-logtest 
    2012/09/26 14:05:58 ossec-testrule: INFO: Reading local decoder file.
    2012/09/26 14:05:59 ossec-testrule: INFO: Started (pid: 29709).
    ossec-testrule: Type one log per line.
    Apr  6 12:42:51 mailsrv1 postfix/smtpd[20508]: NOQUEUE: reject: RCPT from unknown[]: 450 4.2.0 : Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/domain.co.uk.html; from= to= proto=ESMTP helo=
    **Phase 1: Completed pre-decoding.
           full event: 'Apr  6 12:42:51 mailsrv1 postfix/smtpd[20508]: NOQUEUE: reject: RCPT from unknown[]: 450 4.2.0 : Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/domain.co.uk.html; from= to= proto=ESMTP helo='
           hostname: 'mailsrv1'
           program_name: 'postfix/smtpd'
           log: 'NOQUEUE: reject: RCPT from unknown[]: 450 4.2.0 : Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/domain.co.uk.html; from= to= proto=ESMTP helo='
    **Phase 2: Completed decoding.
           decoder: 'postfix'
           srcip: ''
           id: '450'
    **Phase 3: Completed filtering (rules).
           Rule id: '100101'
           Level: '0'
           Description: 'Greylisting ignored'

    So we shouldn’t hear about these any more.

    File Integrity Checking

    Now that you are receiving regular alerts on log file entries, we will focus on the other use of OSSEC that helps us with PCI. Namely:

    11.5 Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.

    Examples of files that should be monitored:
    - System executables
    - Application executables
    - Configuration and parameter files
    - Centrally stored, historical or archived, log and audit files

    OSSEC provides us with syscheck, which allows us to perform file integrity checking.

    By default, OSSEC will scan everything in the following directories every 22 hours and raise an alert if anything about the files changes (e.g. checksum, owner, permissions, etc.):

    • /etc
    • /usr/bin
    • /usr/sbin
    • /bin
    • /sbin

    These are configured in each agent’s /var/ossec/etc/ossec.conf
    So at a basic level we will hear about configuration files and executables being changed. But we need to carefully configure each agent so that “critical system files, configuration files, or content files” are all included in the syscheck.
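
    The defaults in question look something like the following in ossec.conf (exact values can vary between OSSEC versions, so check your own agent's configuration):

    <syscheck>
          <!-- frequency is in seconds: 79200 = 22 hours -->
          <frequency>79200</frequency>

          <!-- check_all compares checksums, size, owner, group and permissions -->
          <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
          <directories check_all="yes">/bin,/sbin</directories>
    </syscheck>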

    For example, on a webserver with a public Javascript directory, you will want to alert on changes to this also.

    To add this, on the webserver in question, edit /var/ossec/etc/ossec.conf and add the following to the syscheck section:

        <directories check_all="yes">/var/www/Javascript</directories>

    Again, since we use unattended installation methods, we preferred to avoid configuring each agent with a different ossec.conf, and OSSEC provides a means to standardise the configuration across all agents.

    Shared agent.conf

    On the OSSEC manager you can create a file called /var/ossec/etc/shared/agent.conf in which you can specify syscheck entries for specific agents. This file will then be automatically distributed to all agents, and if an agent finds a configuration entry with its name on it, it will load it.
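
    The structure mirrors ossec.conf, wrapped in an agent_config element naming the agent it applies to. For example, for the webserver above:

    <agent_config name="WEB_SRV1">
          <syscheck>
                <directories check_all="yes">/var/www/Javascript</directories>
          </syscheck>
    </agent_config>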

    We had some issues with OSSEC’s automatic file distribution, so if you do too, feel free to distribute the file to each agent yourself. It will have the same effect.

    Adding Log Files for Monitoring

    As it happens, /var/ossec/etc/shared/agent.conf is also where we can add extra log files to be parsed for log file monitoring.

    To add /var/log/app1/app1.log on APP_SRV1 we simply add the following to /var/ossec/etc/shared/agent.conf (using the syslog log format here):

    <agent_config name="APP_SRV1">
          <localfile>
                <log_format>syslog</log_format>
                <location>/var/log/app1/app1.log</location>
          </localfile>
    </agent_config>

    Log File Integrity

    So to the final part:

    10.5.5 Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).

    Referring back to our post about PCI DSS Centralised Logging, we can use /var/ossec/etc/shared/agent.conf to tell our log server to monitor the audit logs in /var/log/hosts for changes. Bear in mind, we cannot monitor everything in there because logs, being logs, are changing constantly. We have to accept that and only monitor the rotated files.

    So putting this together, we can create a /var/ossec/etc/shared/agent.conf which will monitor log file integrity on the log server, javascript integrity on the webserver, and parse app1.log on the app server:

    <agent_config name="WEB_SRV1">
          <syscheck>
                <directories check_all="yes">/var/www/Javascript</directories>
          </syscheck>
    </agent_config>

    <agent_config name="LOG_SRV1">
          <syscheck>
                <directories check_all="yes">/var/log/hosts</directories>
                <ignore type="sregex">.log$</ignore>
          </syscheck>
    </agent_config>

    <agent_config name="APP_SRV1">
          <localfile>
                <log_format>syslog</log_format>
                <location>/var/log/app1/app1.log</location>
          </localfile>
    </agent_config>

    You can see how we’ve excluded all the files ending in .log with an sregex match, since they will constantly be written to.
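
    OSSEC’s sregex is its own simplified pattern syntax rather than full POSIX regex, but the effect of that ignore entry is roughly that of this grep (the filenames here are made up for illustration):

    ```shell
    # Files ending in .log (live logs) are filtered out;
    # rotated copies such as .log.1.gz are kept for monitoring.
    printf '%s\n' syslog.log syslog.log.1.gz | grep -v '\.log$'
    # prints: syslog.log.1.gz
    ```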

    The finished file will of course be a lot larger to cover your full file integrity needs, but this is the basic structure you can use to achieve your aims.
