Personal Privacy vs Acceptable Risk

Many enterprises subscribe to the notion that breaches are inevitable, and therefore they categorize some risk as ‘acceptable risk,’ meaning the cost of prevention outweighs the regulatory fines or the remediation costs.

Not a terrible strategy in the right instances, but how can you be sure a given breach will be so easily contained, cleaned up, and paid for?

When assessing risk, think about incidents as “public” versus “private.”

Consider a targeted attack on your back-end infrastructure, perhaps starting with a phishing attack on your employees to steal elevated credentials, followed by a system breach, lateral movement around the network, and possibly a successful data exfiltration.

While this describes a pretty nasty incident, it is ‘private’ in that you largely control the conversation around what happened, to whom, when, and how.

Private incidents also have a unique characteristic: in most cases, your own enterprise is the first to learn of the issue, so you have, within regulatory requirements, a bit more time to make sure the facts are clear and the messaging is tight before you carefully and specifically disclose the details.

Now consider a more public incident. One of your website vendors was compromised and is now actively serving malware to your site visitors. In some respects, this doesn’t even sound as bad as the previous, private example. But a couple of things make this pretty bad for you.

Hard to detect; harder to report.

First, it is very often someone outside of your organization, perhaps your own customer, who detects the issue first. So ask yourself: what percentage of my customers would know, and subsequently attempt to let me know, when my site is compromised?

This can be very industry-specific, and some customers are much more insightful and proactive than others. But it is highly unlikely the first person to be impacted will call you. So, how many users are infected in the interim before anyone notifies you?

Second, I encourage you to call your own company and attempt to report an issue related to your website. If this isn’t a wake-up call, it should be! While many companies, particularly those doing e-commerce, will have a published customer support number to call, many do not. Even a well-intentioned customer may simply give up trying to determine how to reach you, while the problem remains unknown to you, continuing to grow quietly.

But let’s assume they persist.

Who answers the call internally, and how do they escalate it? Do you have an internal process for escalating reported security issues? Like the supermarket that checks ID for anyone buying alcohol, even if the customer appears to be well over 21, have a specific process in place so your customer service personnel aren’t left making a judgment call that further delays the response to the reported incident.

And if you outsource your website security, ensure someone on their security team is available and reacts quickly and appropriately when an incident is reported.

Remember that every reporting delay means the incident is affecting more people outside of your control and, like a gas leak, can suddenly go from a bad smell to a real disaster.

The primary reason that ‘public’ incidents should not be classified as ‘acceptable risk’ is that you don’t control the discovery or the disclosure! Instead of contacting you, one of your impacted customers could contact the media; or they could BE the media; or they could be a class-action attorney looking for a project.

Damage control for a public incident is exponentially more involved, and as a direct result more costly both to contain and to remediate.

Why has such a big problem flown under the radar for so long?

There are two reasons why this business risk vector has grown so large, right in front of our collective faces.

First, websites have evolved significantly in the past decade. Previously, a majority of an enterprise website was built in-house, with just a few third-party code libraries used to measure page loads and other performance metrics.

Most of the website code lived in the data center, where legacy scanning tools could quickly verify safe coding practices.

But today, most modern websites rely on third-party code to present content, target marketing campaigns to specific users, serve video, manage customer interactions, and so much more.

In fact, the entire model has turned upside down. The majority of the code that makes a website work today is no longer owned by the enterprise, doesn’t exist in the data center, and executes completely outside the purview of the IT and Security Teams…in the browsers of the site visitors!

So while many enterprises still scan the data center code, they are in effect addressing a much smaller percentage of the overall code base that executes in their users’ browsers.

This false sense of security creates a security and compliance blind spot that can cost you significant revenue, profit, brand equity, time, or all of the above.
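To get a feel for how large that blind spot is on your own site, it helps to simply enumerate the third-party script hosts a single page pulls in. Below is a minimal sketch of that idea; it assumes Node.js with the Playwright library installed, and the URL and domain test are illustrative placeholders, not a recommendation of any particular tool or target.

```typescript
// Minimal sketch: list the third-party script hosts a single page pulls in.
// Assumes Node.js with the Playwright library installed (npm i playwright);
// the URL is a hypothetical placeholder.
import { chromium } from "playwright";

const SITE = "https://www.example.com/";

async function listThirdPartyScriptHosts(siteUrl: string): Promise<void> {
  // Rough first-party test: same last two DNS labels as the site itself.
  // (A real tool would use a public-suffix list instead of this heuristic.)
  const firstParty = new URL(siteUrl).hostname.split(".").slice(-2).join(".");
  const thirdPartyHosts = new Set<string>();

  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Record every script request served from outside the first-party domain.
  page.on("request", (req) => {
    if (req.resourceType() !== "script") return;
    const host = new URL(req.url()).hostname;
    if (host !== firstParty && !host.endsWith("." + firstParty)) {
      thirdPartyHosts.add(host);
    }
  });

  await page.goto(siteUrl, { waitUntil: "networkidle" });
  await browser.close();

  console.log(`Third-party script hosts loaded by ${siteUrl}:`);
  for (const host of thirdPartyHosts) console.log(`  ${host}`);
}

listThirdPartyScriptHosts(SITE).catch(console.error);
```

Running something like this against a typical page makes the point above concrete: much of what executes for your visitors never touches your data center.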

The second reason this issue has grown so large is the sheer challenge of addressing it. Much third-party code uses advanced targeting techniques to run only for certain users, making it very difficult to monitor across all customer types.

Bad guys know this.

Malware uses many of the same techniques to target its attacks and to hide from detection. In some cases, to limit the chance of detection, malware will initially present itself to a very narrow audience (for example, only women who use a Macintosh and live in Ohio) until the back-end servers and communications are working and tested; then the attackers broaden their scope to infect a much wider audience. This narrow focus is easily missed by security teams whose members don’t look like female Mac users from Ohio.

How best to evaluate your own level of risk?

How dynamic your website is (presenting different content in different ways to different users, devices, OSs, browsers, geographies, etc.) directly impacts the complexity of the solution.

But start with a very basic approach: emulate as many different user types as necessary to reflect the broadest demographics of your site visitors.

This will trigger the most third-party code executions and give you a truer picture of what really happens to your visitors. Visit your own site at scale using all of these personas, run all the code, view all the images, and generally “experience” the site as your visitors do.
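As one illustration of what visiting with personas can look like, here is a minimal sketch, again assuming Playwright; the personas, coordinates, and URL are illustrative placeholders, and a real program would cover far more combinations of device, locale, and geography.

```typescript
// Minimal sketch: browse your own site under several visitor personas so
// that persona-targeted third-party code actually executes and can be
// observed. Assumes Playwright; personas and URL are illustrative only.
import { chromium, devices } from "playwright";

const SITE = "https://www.example.com/";

// A handful of personas; a real crawl would cover far more combinations.
const personas = [
  { name: "US desktop Chrome", options: { locale: "en-US" } },
  { name: "German iPhone", options: { ...devices["iPhone 13"], locale: "de-DE" } },
  {
    name: "Mac user in Ohio",
    options: {
      locale: "en-US",
      geolocation: { latitude: 41.5, longitude: -81.7 }, // roughly Cleveland
      permissions: ["geolocation"],
    },
  },
];

(async () => {
  const browser = await chromium.launch();

  for (const persona of personas) {
    // Each persona gets an isolated browser context (own cookies, storage).
    const context = await browser.newContext(persona.options);
    const page = await context.newPage();
    const scriptHosts = new Set<string>();

    page.on("request", (req) => {
      if (req.resourceType() === "script") {
        scriptHosts.add(new URL(req.url()).hostname);
      }
    });

    await page.goto(SITE, { waitUntil: "networkidle" });
    console.log(`${persona.name}: ${scriptHosts.size} script hosts`, [...scriptHosts]);

    await context.close();
  }

  await browser.close();
})();
```

Comparing the script hosts seen by each persona is exactly how targeted, persona-specific code reveals itself.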

(See below for how Privacy101 can help you test your websites.)

Be on the lookout for known vulnerabilities, obfuscated or modified scripts, rogue domains, cookie drops and pixel tracking, and any other malfeasance. It’s really a numbers game…the more of the above complexity you have, the larger your risk profile and the less “acceptable” the risk should be.
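Building on the persona sketch above, the snippet below shows one way to surface some of those signals during a single visit: requests to hosts you haven’t approved, likely tracking pixels, and every cookie dropped along the way. The allowlist, URL, and heuristics are illustrative assumptions, not a complete detection engine, but they demonstrate the kind of evidence your monitoring should produce.

```typescript
// Minimal sketch: flag requests to hosts you haven't approved, plus cookie
// drops and likely tracking pixels, during a single page visit. The
// allowlist, URL, and heuristics are illustrative assumptions only.
import { chromium } from "playwright";

const SITE = "https://www.example.com/";
const APPROVED_HOSTS = new Set([ // hosts you expect and have vetted
  "www.example.com",
  "cdn.example.com",
  "www.google-analytics.com",
]);

(async () => {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();

  page.on("request", (req) => {
    const url = new URL(req.url());
    // Rogue-domain check: anything outside the allowlist deserves review.
    if (!APPROVED_HOSTS.has(url.hostname)) {
      console.log(`[unapproved host] ${url.hostname} (${req.resourceType()})`);
    }
    // Crude pixel-tracking heuristic: images named like beacons or trackers.
    if (req.resourceType() === "image" && /pixel|beacon|track/i.test(url.pathname)) {
      console.log(`[possible tracking pixel] ${req.url()}`);
    }
  });

  await page.goto(SITE, { waitUntil: "networkidle" });

  // Cookie drops: everything set during the visit, including third-party cookies.
  for (const cookie of await context.cookies()) {
    console.log(`[cookie] ${cookie.name} set for domain ${cookie.domain}`);
  }

  await browser.close();
})();
```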

Oh, and do this monitoring from outside your own IP address blocks. Bad guys also know how to recognize traffic coming from your own address space, which likely includes your security team, and hide from it.

Summary

In the end, classifying risk as “acceptable” shouldn’t consider only the likelihood of an incident, but also the magnitude of any single incident. An unmonitored website provides a broad attack surface for very “public” incidents and, as such, should not be classified as acceptable risk.

Not fully understanding this key attack vector is, in itself, an unacceptable risk.

Contact Privacy101 today for a complimentary website risk analysis so you can better understand your own website’s risk profile, along with a list of key questions you should answer to fully quantify the level of risk your own site represents…and just how “acceptable” that risk is for your business.

© Copyright 2020, Privacy101, LLC. All Rights Reserved