
Cyber Security for Critical Infrastructure 4.0

On the 26th and 27th of March I was invited to participate in the Cyber Security for Critical Infrastructure 4.0 conference, organized by Cyber Senate in Amsterdam. It was a very nice conference, brilliantly organized by Alex Matthews and James Nesbitt. Chris Blask was in charge of coordinating the conference, and we all enjoyed a nice flow of talks, panels and breaks.

One thing that was very clear to me from the beginning is how much better positioned cyber security specialists are in the area of critical infrastructure. That must be attributed to the above-average alignment between the business needs of critical infrastructure (integrity and availability) and cyber security fundamentals. I listened with great interest to the presentations from Menno Vlietstra and Sergey Galagan, who very eloquently explained what led them to set up a Cyber Security department in their respective companies. I did not find a single panel or presentation boring or pitched at an inappropriate level, and I very much enjoyed the presentation delivered by Franky Thrasher on Threat Intelligence. A fascinating subject for sure.

It was also very interesting to hear from Bobby Singh, during our panel discussion, about how the Toronto Stock Exchange has its ERM function intertwined with cyber risk.

Overall it was a very well organized conference, in a good location and with all presentations being well above average and to the point.

Tagged in : conferences, security, presentations

Fighting bias in security analysis

I am a huge fan of automation; I strongly believe that automation, machine learning and/or artificial intelligence (whatever these terms mean to different people) are our best chance to tackle one of the biggest problems we have in the cyber security industry: human limitations.

As the amount of transactional and log data generated and in need of analysis keeps growing, the option of throwing traditional resources at the problem (in this case: human minds) disappeared more than a decade ago. I know that many companies are not using any analysis automation tools even today, but that is a different situation. These companies are not going anywhere - they are just set to find out their breach details some years later. If they’re lucky, before they hit the headlines.

Most companies, though, utilize machine analysis and correlation to cut through the noise. That is the right decision, but one needs to be aware of some potential issues. These issues are amplified when machine learning is used, due to the lower number of human minds involved in the process.

Human-performed analysis

No matter what technology one uses today, there is always human involvement to a significant degree. This involvement lies in defining the rules of what is important and what is not, which is exactly where human bias comes into play. Any triage function happens after an alert is generated - but what happens if an alert is never generated because the analysis system is biased?

I recently came across Buster Benson’s cognitive bias cheat sheet. Besides being an interesting and refreshing read, it correlates so well with bias in cyber security log analysis that it’s scary.

Machine learning / machine-performed analysis

There are two major models of machine learning: supervised and unsupervised. You may learn more about these models from several articles; I suggest this one, which is appropriate for all levels of understanding and depths of prior knowledge. The caveats presented apply to a different degree to different models. But the bottom line is: they apply, and the more we keep that in mind, the better prepared we are.

Too much information

As there is too much information available, we humans tend to only notice outliers and changes in patterns. We only notice when something significantly changes. In addition, we tend to ignore information that contradicts our existing beliefs.

Take that into log analysis. Any rules one may design will have the same disadvantages:

  • Brute force attacks and spraying attacks may be identified through thresholds - the equivalent of changes in patterns and outliers
  • Repeated alerts that are considered to be false positive will be snoozed and ignored instead of analyzed every time - the equivalent of confirmation bias. I would argue that spraying attacks in the past were being ignored due to the time distribution of the events that wouldn’t meet the thresholds.
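
As an illustration, here is a minimal sketch of such a threshold rule, assuming sshd-style “Failed password … from IP” log lines. The function name and the default threshold of 10 are hypothetical choices for this example:

```shell
# count_failed_logins FILE [THRESHOLD]
# Print source IPs with more than THRESHOLD failed-login lines.
# A password-spraying attacker who sends one attempt per account from
# many different addresses never crosses the per-IP threshold, so this
# rule inherits the "only notice outliers" bias described above.
count_failed_logins() {
    awk -v t="${2:-10}" '
        /Failed password/ {
            for (i = 1; i < NF; i++) if ($i == "from") count[$(i+1)]++
        }
        END { for (ip in count) if (count[ip] > t) print ip, count[ip] }
    ' "$1"
}
```

Running it against an auth log, e.g. `count_failed_logins /var/log/auth.log 10`, surfaces only the loud, bursty sources - exactly the bias in question.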

Not enough meaning

We need context to make sense of what we experience and see. Humans tend to assume behaviors based on stereotypes and generalizations. We tend to overvalue historical information, and we round / simplify numbers and percentages.
Of course, we think we know what the general context is and what others are thinking.

Apply that to rule analysis and you will see the similarities:

  • Contextualizing the information means we enrich and correlate data based on assumptions about how systems interact, how they communicate and what we know
  • Historical information is used as a basis to highlight patterns, ignoring the fact that the historical information may be giving us a wrong pattern altogether

Not enough time

Overconfidence, triggered by our need to act fast, may impair our judgement. Humans tend to focus on events happening in a short time span, either in front of us or in the recent past. We tend to choose the safest option, the solution least likely to cause problems, so simple solutions are often preferred.

When we implement rules to analyze security logs, we act in a similar way:

  • We may set up historical trend analysis based on a not-very-distant past
  • Often, we do not create models that are enriched over a long time span, due to the need for an immediate response
  • Most companies tend to fine-tune for the minimization of false positives in order to lower the probability of business disruption
  • As complexity is the enemy of security, we opt for simple solutions as often as possible

Not enough memory

Since we humans cannot remember everything, we tend to keep in our memory what is repeated; in that way we reinforce our memories. At the same time we choose to ignore specifics in favor of generalizations, and we tend to summarize information, creating a more abstract model.

Does that sound familiar to what we are doing when managing logs?

  • Usually systems provide two storage pools: the log storage and the event storage. Event storage - and specifically storage of positive events - tends to be kept longer than log storage, and any subsequent events will look into event storage for correlation and enrichment.
  • Connectors, descriptors and generalizations are used by some systems to ingest logs from different sources through a (usually least) common denominator.

The problem with biases

Human mind biases are still a subject of discussion, study, analysis and argument. The truth is that people with fewer biases usually lose debates against people with more biases, simply because the more biases one has, the less probable it is that this person will change their mind. There is no simple solution to that, except of course education, education and education.

In security log analysis some biases are even positive, wanted and intentional. One of these is the effort to suppress false positives.
In the big picture, a security analyst or a team that develops rules for event analysis needs to come up with log correlations, thresholds, patterns and alerts. These people should be well aware of the potential biases and try their best to avoid falling for them during the design phase. The reason is that whatever gets into the design phase reinforces the biases, making escape almost impossible. That would lead to security events going unnoticed.

It is also very important to have these rules independently reviewed quite often. A third party, a new employee - someone who was not part of the previous rule definition - has the potential to offer a fresh view, hopefully eliminating some of these biases.

Finally, when you look for your next SIEM solution, you might want to ask that the implementation team provided by the vendor be a new one, consisting of people who have never worked together before. This is one way to avoid human biases such as anchoring, choice-supportive bias and the continued influence effect, among others.

Other ideas to overcome bias in security event analysis?

Illustrations either original by @buster or slightly adapted

Tagged in : security, management, artificial intelligence, machine learning

Securing administrative access with MFA

Now that multi-factor authentication is gaining ground, I thought I would write a simple guide on how to secure administrative access with MFA on Linux systems. The solution is simple and based on Google Authenticator. The good thing about Google Authenticator is that it’s a typical TOTP/HOTP solution and as such does not require any internet connectivity on either the server or the client.
The configuration examples provided are more or less appropriate for openSUSE Leap 15 and Ubuntu 18.04 LTS.

I assume you know how to set up Google Authenticator on Linux. Most of the time it’s just something like “apt install libpam-google-authenticator” (on Ubuntu) or “zypper in google-authenticator-libpam” (on openSUSE). After you install it, you need to run - as every user - the command google-authenticator to generate the seed and configuration. Then you add this seed to your mobile phone’s authenticator app of choice (such as Google Authenticator for either Android or iPhone). At this point you have a working one-time password setup - but it is not used anywhere yet.

In this article I will focus on which services we need to modify in order to securely protect privileged access.

If you have more than a few servers and a few users, I suggest investing in a proper IAM solution. Having said that, the design I provide should be compatible with all IAM products. You may opt to use it if they don’t offer a similar design of their own, or if you want to avoid their suggestions (such as sharing keys and passwords between users, which I find a terrible idea).

This design can easily be part of a centralized configuration management solution such as Puppet, Ansible, Chef, Salt, CFEngine or SUSE Manager.
Beware: storing the seeds and recovery codes as plaintext in a central location means that your central repository becomes super critical - if it wasn’t considered such already. Make sure you include that information in your threat model.

Why secure administrative access

According to an article from 2016, the misuse of privileged credentials was involved in 80% of data breaches. That number only went down to 74% recently. I believe that is reason enough to secure your privileged access. Actually, almost all the solutions around Identity and Access Management focus specifically on this - the protection of privileged access. There may be different approaches and thoughts about what privileged credentials are, but as I’m talking about Linux here, the idea is to protect the root account.

Do we also secure non-administrative access?

There is a simple question many people ask: why not enable multi-factor authentication always and for everything? Although multi-factor authentication has significant advantages over single-factor, it’s not always necessary. Imagine this simple scenario: you connect to your network through VPN, and from there you can only get into a bastion host before you log in to a server - as an unprivileged user. I cannot help but wonder: how many OTPs will you use, and with how many different seeds? And how confident are you that you won’t lock yourself out if you use different seeds? Again, it’s about your threat model. If a threat scenario is that your MFA seed is stolen, you may ask for an OTP at every other command without improving your security the tiniest bit!

I always suggest having strong access control while keeping things reasonable and operations usable. You may - as an example - skip MFA on console logins, provided that you have proper physical access control. I, for one, set up my SSH access with two options:

  • I can log in using public key authentication as the single and only factor. My key is password protected, as it should be, and is actually stored on a smart card
  • I can log in using PAM, which mandates multi-factor authentication with a combination of a password and an OTP provided by Google Authenticator

I believe these two options are more or less equally (and in my case, adequately) secure.

PAM Configuration for MFA mandate

When we installed the Google Authenticator binary, we installed a PAM module as well. This Google Authenticator PAM module provides functionality only for authentication. In most Linux distributions I’m familiar with, there is a common-auth file which defines the common authentication functionality. This common file is then included in all (or most) service files. Put simply, if we want to enable MFA globally, we need to make our changes in this common-auth file.
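
For reference, this is roughly how that inclusion looks in a service file - a sketch; the exact syntax differs per distribution:

```
# /etc/pam.d/sshd (excerpt)
@include common-auth          # Debian/Ubuntu style
# auth include common-auth    # openSUSE style
```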

My suggestion, though, is not to enable it globally. As it requires a per-user setup, you may break things such as su, for example. A global enablement will apply to the root user as well, and this is not something we want at this point. We do want to protect the root user, and we will, but not by enabling MFA globally.
I also suggest you start enabling Google Authenticator in specific services only after all your physical users have initialized it for their use.

Useful PAM details

PAM modules work in a stackable way. One may add several modules and, depending on their control method, authentication is considered successful or not. There are four different controls:

  • sufficient: if the module succeeds then that’s enough, you are authenticated. If the module fails, PAM moves on to try other modules
  • optional: if this module is set up alongside others then it’s actually ignored. If it’s the only module set up, then its result defines success or failure.
  • requisite: if the module fails, authentication fails immediately. No other modules after that are even tried.
  • required: if the module fails, authentication fails, but not immediately: PAM will move on to try all the other configured modules. That is the preferred option for an MFA setup. First you check the user’s password; no matter whether that succeeds or not, you move on and try the OTP. This makes the google-authenticator PAM module work as multi-factor authentication and not two-step verification.

Some other MFA implementations (cough MS-O365 cough) only request the OTP after successfully confirming the user’s password. This setup actually gives a potential attacker the opportunity to work on cracking the password first, before moving on to solving the OTP issue. It also enables potential attackers to perform a DoS by trying different passwords on different users and locking your users out.

Using PAM you can avoid the first problem and mitigate the second by configuring the control as required. The attacker is thus forced to guess and/or crack/bypass both authentication factors at the same time, because there is no information about whether either of them is correct.
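
As a sketch, a stack along these lines implements that behavior, assuming pam_unix handles the password factor (module options and ordering per your distribution):

```
# Both factors are "required": PAM evaluates both modules and reports
# only the overall result, so a failed attempt reveals nothing about
# which factor was wrong.
auth required pam_unix.so                  # factor 1: the password
auth required pam_google_authenticator.so  # factor 2: the OTP
```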


Considering you have initialized Google Authenticator for the users as per the documentation, you need to enable it. In the file /etc/pam.d/sshd, add after all other auth lines the following entry (in one single line):

auth required pam_google_authenticator.so

I assume that your common-auth file is included in the sshd PAM file and that it contains pam_unix.so (true for both openSUSE and Ubuntu).

Configure sshd appropriately. In the /etc/ssh/sshd_config file you should have (among others) the following entries:

PasswordAuthentication no
ChallengeResponseAuthentication yes
UsePAM yes
AuthenticationMethods publickey keyboard-interactive:pam

In essence, this is how it ends up: the configuration allows either single-factor public key authentication, or PAM-based authentication which we just set up to be multi-factor.

SU, SUDO and Configuration changes

If you want to use su to switch to root you may, but you will run into issues with the Google Authenticator seed. As Google Authenticator would require root’s OTP to be provided, you would need to give the seed of the root account to all your administrators - something I don’t advise.

Instead, I suggest you disable the root account (as Ubuntu already does) and use sudo for performing administrative tasks. There is no limitation, and if you want to revert to su-like functionality you may simply sudo -i. The difference here is that sudo does not authenticate root; it authenticates the invoking user. That is the default configuration and, although you may change it, I suggest you leave it as it is.
In that case Google Authenticator will require the user’s OTP, so every user will just provide their own OTP.

sudo is significantly more flexible and configurable than su. It can provide fine-grained access control and has one additional benefit: a parameter called timestamp_timeout, which defines for how long sudo will remember that the user is authorized. This, if configured properly, can actually minimize the annoyance of being constantly asked for an OTP.
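
For instance, a sudoers entry along these lines keeps the authorization cached (edit with visudo; the 15-minute value is just an example):

```
# /etc/sudoers (excerpt) - remember a successful authentication
# for 15 minutes instead of the default 5
Defaults timestamp_timeout=15
```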

The problem with using sudo with the default configuration of Google Authenticator, though, is that if an account gets compromised and the attacker is already in the system as an unprivileged user, they can read the seed from the .google_authenticator file itself and generate an OTP with nothing stopping them.

To the rescue: the Google Authenticator module has an option that was designed for encrypted home directories but works nicely in this case too. You can keep the users’ .google_authenticator files somewhere other than their home directories, not even readable by the users themselves.

I suggest moving these files to a directory under /etc (let’s call the directory /etc/gaseeds) which will be root-owned. The permissions of the seed files need to be 0600.
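
A minimal sketch of that migration could look like this. The move_seed helper is hypothetical; run the real thing as root and adjust paths to your environment:

```shell
# move_seed USER HOMEDIR SEEDDIR
# Relocate one user's .google_authenticator file into a root-owned
# directory such as /etc/gaseeds, locking down the permissions.
move_seed() {
    user="$1"; homedir="$2"; seeddir="$3"
    mkdir -p "${seeddir}/${user}"
    chmod 0700 "${seeddir}/${user}"
    mv "${homedir}/.google_authenticator" "${seeddir}/${user}/.google_authenticator"
    chmod 0600 "${seeddir}/${user}/.google_authenticator"
    # as root, additionally: chown -R root:root "${seeddir}"
}
```

Run it once per user, e.g. move_seed alice /home/alice /etc/gaseeds.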

To do that you need to change the configuration of Google Authenticator and add two parameters: the path and the owner. This change has to be implemented in all the services you have set up to use Google Authenticator. The new entry should be (in one single line):

auth required pam_google_authenticator.so user=root secret=/etc/gaseeds/${USER}/.google_authenticator

Bonus side effect: any user who has not set up Google Authenticator will not be able to sudo until they set it up and an authorized user moves their .google_authenticator file into the appropriate directory. Although we can control sudoers from the appropriate file, an extra control mechanism doesn’t hurt.


What I describe can be set up in less than 5 minutes. What we managed in these 5 minutes is to ensure that we enforce MFA for privileged access. The steps were:

  • Allow single-factor public key authentication to SSH
  • If that fails, request multi-factor password+OTP combination for SSH
  • Allow the authorized users to sudo with their password+OTP combination
  • Ensure that the OTP seeds are only readable by root

Using a proper configuration management tool, you can push this configuration to all Linux servers in your environment and use the same Google Authenticator app to log in and sudo to root on all of them.

Thoughts and improvements are welcome.

Tagged in : linux, security, access control, two-factor authentication