Supply chain risk management is one of the big security problems in the business world. It is so important that there is hardly a security framework that does not include it in its guidance and suggested controls. NIST has a set of resources on the topic, and it is far from the only organization addressing the problem.
Disclaimer: Nothing below should be taken as a criticism of the services offered. Pointing out their flaws and inefficiencies does not mean they have no value.
The problem is not new, of course. I remember a post I wrote in 2017 about CCleaner and the limits on how companies can defend themselves against attacks like that. A lot has changed since 2017, and most companies now try to be proactive, meaning they try to stop such problems before they happen. How has that model worked for Okta, for the customers of SolarWinds and Kaseya, and for all the other victims of supply chain attacks? Not at all.
As usual, when a problem comes up in business, more than one solution is offered. In this case, the solutions are all very similar, and none of them work.
Third-party risk reports
The so-called "third-party risk analysis" is one solution that has emerged, and it has a sizeable market: an estimate of a third party's risk based on that party's presence on the internet. Many companies, such as Black Kite, BitSight, RiskRecon, and SecurityScorecard, offer this service. What their marketing materials do not say, though, is that this way of estimating risk is inherently flawed.
A company's public presence, and the vulnerabilities in its internet-facing assets, can be assessed accurately in only a few limited cases. What happens if a company has outsourced its websites to a different company? Simple: the results you see are not for the company you think they are. What happens if a company uses Linux servers? Simple: the vulnerabilities you see may or may not be there, because the advertised package version does not change when security fixes are backported. And even if we ignore these inherent flaws, in most cases (with the probable exception of SaaS companies, and not all of them) the website team and the product team, i.e. the team behind what you actually buy from this third party, are completely different.
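The Linux problem comes from backporting: distributions such as Debian ship security fixes in a new package revision while keeping the upstream version string unchanged, so a scanner that maps banner versions to CVEs will keep flagging patched systems. A minimal sketch of why, with illustrative version strings:

```python
def split_debian_version(version: str) -> tuple[str, str]:
    """Split a Debian-style package version into (upstream, distro_revision)."""
    # Drop an optional epoch prefix: "1:2.4.0-1" -> "2.4.0-1".
    _, _, rest = version.rpartition(":")
    upstream, sep, revision = rest.rpartition("-")
    if not sep:  # no distro revision at all ("native" package)
        return rest, ""
    return upstream, revision

# Two hypothetical snapshots of the same package, before and after a
# backported security update: the upstream part that a banner scanner
# keys its CVE lookups on is identical; only the distro revision moved.
before = split_debian_version("1.1.1n-0+deb11u3")
after = split_debian_version("1.1.1n-0+deb11u5")

print(before[0] == after[0])       # True: same upstream version string
print(before[1], "->", after[1])   # 0+deb11u3 -> 0+deb11u5
```

The scanner sees the same upstream version in both snapshots and concludes nothing was fixed, which is exactly the false positive described above.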
This limitation is not only about websites; it also applies to the cyber-squatting these services analyze, leaked credentials, mail server settings, and so on. I even saw one of these companies reporting leaked credentials and highlighting them as a major security issue. Only it was a widely known leak, one you can find in HIBP, which contained no passwords and had no relationship to the company in question. The only link was that some employees of the analyzed target had used their corporate email addresses to register with the breached company.
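The flawed inference is easy to reproduce: the scoring service sees corporate email addresses inside a third-party breach dump and attributes the leak to the employer. A toy illustration, with entirely made-up records:

```python
# Made-up records from a breach of some unrelated consumer site
# (no passwords involved, nothing to do with the employer's systems).
breach_records = [
    "alice@example-corp.com",
    "bob@gmail.com",
    "carol@example-corp.com",
]

def attribute_breach_to(domain: str, records: list[str]) -> list[str]:
    """The scoring service's inference: any corporate address found in the
    dump becomes a 'leaked credentials' finding against that company."""
    return [addr for addr in records if addr.endswith("@" + domain)]

findings = attribute_breach_to("example-corp.com", breach_records)
print(findings)  # two "findings", yet the employer was never breached
```

Two employees registering on a consumer site with their work addresses is enough to generate a red flag on the employer's scorecard.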
Overall, this guesswork has so many parameters that can go wrong that you can bet the majority of the findings will be inaccurate.
Insurance companies are among the users of this method, which is a shame: a deeply flawed way to solve a problem, one that makes money for the companies selling the service, adds work for the companies being analyzed, and gives almost no assurance to the people buying the reports. Maybe they have realized it, and that is why they are getting out of the cyber insurance business, one by one.
Speaking of unnecessary effort, another method by which businesses attempt to handle third-party risk management is requiring their suppliers to complete security questionnaires. There are several interesting dynamics there, which I will attempt to characterize based on my "what were they thinking?" theory:
I will mandate your security controls
Most questionnaires include fascinating questions, such as remarks concerning antivirus signatures and password complexity. Nobody truly considers what will happen (and what actually does happen) when two customers have opposing requirements (implicit, and phrased as questions in this case). One client requests that passwords expire every 90 days, while another requests that they never expire. The first asks for what was considered normal a decade ago; the second asks for what is currently recognized as best practice but is not widely followed in many businesses (and should not be considered in isolation). If you are on the receiving end and try to answer honestly, you are in serious trouble. If you are on the sending end, you have already placed your vendor in a situation where they must choose between losing contracts and not being entirely honest with some clients. The difficulty is that risk management remains the vendor's responsibility; as a client, you have neither the standing nor the power to impose your preferred technical controls. Statements addressing data retention, backup procedures, password security, access controls, and so on are common in this category.
I will control everything you have
In many situations, I have seen questionnaires that require (again, implicitly) the vendor to provide access to their logging systems (typically the SIEM) in the event of a security incident. Typically, a few paragraphs later there are clauses about data isolation, who is authorized to access customer-related data, and so forth. These clauses are self-contradictory, because if you have to offer access to your SIEM, it will contain security events from all of your systems, as well as (meta)data related to other customers (filenames, usernames, etc.). By disclosing this information to a third party, you are breaking your contract with every other customer who has the same conditions. But, you argue, event data is not customer data. And I would like to know whether you are ready to argue that in court following a security incident.
I will do what is not my job
The majority of questionnaires ask for standardized controls like vulnerability management and access control. Nice intentions; however, if the firm is certified against one or more industry standards, one would expect that these controls (not only their inclusion in the policy but also their actual implementation) have been audited. We all know that compliance and security are not the same thing, and that most compromised firms hold a slew of certifications, making the value of these audits relatively low. Yet the problem persists: if a (certified and trained) auditor did not discover the gaps, what makes one believe that a junior or contracted security analyst who merely follows a questionnaire will have a greater chance of spotting them?
I do not even know what I want
I have received so many questions concerning cloud services when the only item sold is on-premises software. I have received so many questions concerning software development processes when the only thing on offer is professional services. Questionnaires appear to be getting increasingly generic, with no one bothering to tailor them to the use case. This not only adds unnecessary effort to the recipient's plate, but also confirms that for the sender this is merely a checkbox exercise that does nothing to address third-party risk management.
Why questionnaires do not work
There are two major reasons questionnaires do not work: the binary nature of the questions, and the lack of accountability in the company completing them.
Most questionnaires I have seen contain binary questions: for example, do you have antivirus software or not? Things are not so straightforward, though, because we all take a risk-versus-feasibility view. As a result, the vendor may not have antivirus on all computers, omitting, for example, Linux systems (where the available antivirus products are of such low quality that they provide almost zero security). Nonetheless, if one must respond with a yes or no, one should expect answers that are not entirely true.
The other point, accountability, is even more interesting. There has not been a single organization I have worked for where every security questionnaire reached the security department for evaluation. A commission-hungry salesperson will answer favorably to anything. I usually presume good intentions, but most people outside the security department do not know the subject in depth, so they paint a lovely picture that may not be realistic.
Security terms in contracts

Most of the questionnaires I have seen recently are meant to be inserted as addenda to the contract between the two parties. In some cases there is no questionnaire at all, only a list of requirements. Some of the questionnaire problems apply here as well (password requirements, vulnerability remediation timescales, notification requirements, and so on), but I have already discussed those. Let us proceed and check whether the contract terms work.
That strategy has potential, but there are two issues. First, there is no genuine negotiation. This material is too vague and too specialized for the legal departments that draft the contracts. Security terms are typically just a list of controls with no evident relationship to a risk, so they do not make much sense to people who are not security experts. The result is a broken, indirect interaction between the two security departments, the seller's and the buyer's, via the legal or sales departments, with the security department seen as blocking the business on both sides. Unfair? You bet. Real? Well, yes and no... In the end, the contract is signed with conditions agreed upon, after lengthy talks, by people who have no idea what they are discussing or why. What happens when a disagreement cannot be resolved in practice? I will only say that I have never seen a sale fall through because the seller's security department said "we cannot do that".
The second problem is that the contractual terms may end up as liabilities and damages. Ah! This is a different matter, because lawyers do understand that. As a result, the seller is generally accountable only for the value of the annual contract, and that is it. That is usually cheaper than implementing all the security controls requested; hence, from a risk management standpoint, it makes sense to agree to everything and implement nothing whenever the implementation cost exceeds the liability. As a client, I have worked with suppliers that have obviously followed this method: despite the fact that their security posture is plainly horrible, they have no difficulty agreeing to everything you throw at them.
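The arithmetic behind this choice is depressingly simple. With entirely hypothetical figures, capping liability at the annual contract value makes "agree and do nothing" the rational option whenever the expected payout stays below the implementation cost:

```python
# Hypothetical figures for one vendor-client relationship.
annual_contract = 100_000        # liability capped at the annual contract value
implementation_cost = 250_000    # cost of actually implementing the requested controls
incident_probability = 0.10      # vendor's own estimate of a liable incident per year

# Expected annual cost of "agree to everything, implement nothing":
expected_liability = incident_probability * annual_contract

# With the cap in place, non-compliance is the cheaper option whenever
# expected_liability < implementation_cost -- here 10,000 vs 250,000.
print(expected_liability < implementation_cost)  # True
```

Even with the probability set to 1 (a certain incident every year), the cap keeps the payout at 100,000, still well below the cost of compliance in this sketch.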
It does not help that some of the requirements are impossible to meet. I saw one that asked an organization I worked for to report every antivirus alert, including handled alerts, i.e. what the antivirus had already stopped, within an hour, by phone. The sales staff accepted (against my recommendation) and never notified anybody. Surprisingly, no one on the other side ever questioned that, proving they were unaware of what they were asking.
Finally, is there a solution?
There is a major industry out there called "certifications". Let me state unequivocally that I do not trust certifications, not only because they mostly reflect a single moment in time, but also because the auditors are frequently lacking. Some certifications, such as ISO27001, are risk-based, while others, such as TISAX or (the upcoming) CMMC, are maturity-based; I favor the latter.
Certifications in their current form mainly benefit the auditors. They provide no real assurance about anything. Yet this does not have to be the case; audits have the potential to improve. That, however, will not happen until there is a motive.
A comparable example of mismanagement being resolved can be found in the past. The Sarbanes-Oxley Act of 2002 is a United States federal law that mandates specific financial record-keeping and reporting practices for companies. Although not ideal, most people believe that SOX forced firms to be more transparent about their financial practices, under the threat of criminal charges against company officers.
A SOX for Cybersecurity?
Misrepresentation is not addressed in any regulation, since there is no standard defining what is adequate. GDPR has provisions for fines, which can be summed up as "if you did not do your job as you should have, you will pay a fine", allowing Data Protection Authorities to determine what is acceptable in each case. Regrettably, GDPR applies only to personal data and does not cover breaches of non-personal data, but this is the best we have for the time being.
Both the US government and the European Union are working on cybersecurity regulations. The rules I have seen in draft materials are pretty stringent and unambiguous, and the sanctions, as well as the baseline against which one will be assessed, will be significant. If this happens, the deception of claiming that one does everything, on the basis that liability is limited to the value of the contract, will no longer be such an easy way out.
Isn't this victim blaming?
GDPR has once again nicely paved the way. Nobody wants to penalize or blame the victim of a hack. On the other hand, if the victim did not take the necessary precautions to reduce the likelihood or severity of the incident, they are, and should be, held accountable. I believe an approach similar to GDPR would work if applied to all data breaches.
Meanwhile, the contractual limitations of liability must be rationalized. GDPR addresses this as well, with the option of class action suits, which can also address the issue of misrepresentation.
Image by master1305 on Freepik