
Drivers for a Zero Trust Architecture, part the second: How old security fails in the face of new threats

Mar 18, 2022

This is the third commentary in the series “Defining Zero Trust Security.”

Zero trust represents the present and future of enterprise cybersecurity. Change is driving its adoption, whether that change is a response to outside events (like a pandemic), is motivated by new work technologies (like the cloud), or is fostered by proactive digital transformation. Successful zero trust adoption -- at both a micro and a macro level -- requires a comprehensive understanding of its genesis.

Legacy security: unsafe, insecure, and woefully entrenched in corporate IT
With all this change -- hybrid workplaces, accelerated innovation, and new technologies -- one aspect of corporate IT remains stubbornly unchanged: data security. On-premises legacy security infrastructure, often referred to as a “castle-and-moat” approach, continues to hold sway over enterprise security leaders.

The perimeter-based security model was developed in the 1960s to secure closed network environments. It’s still commonly used today, even though most work is now performed on the internet, outside the corporate network, and outside the purview of IT security. Castle-and-moat security is no longer appropriate for securing enterprise work: It’s only as strong as its weakest point (say, an employee’s vulnerability to phishing emails), and once network security is breached, every system connected to the network is put at risk.

The model leaves IT leaders in the dark: They often have no way to measure the security posture of their environments. Most don’t know their companies have been targeted for cyberattack until they see evidence (like ransomed data) after the fact.

It’s also costly to maintain. Hardware-based security doesn’t scale and relies on fixed bandwidth capacity: IT leaders must plan and procure for estimated peak-bandwidth connections, and traffic spikes can quickly bring systems down.

Why does such an archaic and insecure model endure? For one thing, proactive, ahead-of-the-curve change is hard. Digital transformation requires commitment, vision, flexibility, and an open mind. When the status quo is more than a half-century old, “doing things the way we always have” is easier than taking a (perceived) risk on a new security model. (Sadly, that inertia endangers enterprise cybersecurity and puts data assets at unnecessary risk.)

Second, IT leaders are often averse to digital transformation because they (naively) view it as ceding control. Such “box-huggers” are afraid to relinquish ownership of “the appliance down the hall.” But that control is an illusion. In reality, appliance-based security hardware bottlenecks performance for everyone and represents a cybersecurity risk to the company. Ironically, by clinging to hardware, IT leaders cede control over cybersecurity to outside hackers.

Finally, legacy infrastructure puts enterprise IT leaders in the business of cybersecurity. They must keep up with the latest patches, and (try to) stay one step ahead of well-funded cybercrime groups. If your company makes widgets, wouldn’t you want to steer funding to making a better widget instead of fighting cybercriminals on their home field?

New threats, new adversaries, and new ways for enterprises to get (badly) hurt
The nature of business risk has changed, and it’s because legacy security has more holes than Swiss cheese. With the prolonged entrenchment of outdated legacy security models has come an easily predictable, entirely understandable rise in cybercrime.

Hackers are launching cyberattacks on enterprise targets with greater frequency, breadth, sophistication, and severity. The real and potential costs to enterprises (in monetary, operational, brand, intellectual property, and even human terms) are significant. Yet many organizations have responded with ineffective security bandages, fatalism, or, worse, complacency.

Attacks have become more consequential, and threat actors have become more professional. The notion of the lone hacker working in a basement isn’t accurate anymore. The cybercriminals attacking enterprise organizations across the globe are organized, and often state-sponsored. Advanced Persistent Threat (APT) groups have popped up in China, Russia, elsewhere in Eastern Europe, South America, North Korea, and even Vietnam.

Malware comes in many forms -- spyware, adware, cryptojackers, data exfiltration malware, worms -- but ransomware receives the most attention. For hackers, cyberattacks can be extremely profitable. Ransomware has become so lucrative that it’s generating new business models. Ransomware groups are even “double- or triple-dipping,” extorting one ransom for unlocking encrypted data, another for “not” selling that seized data on the dark web, and then profiting again...by selling that seized data on the dark web.

Ransomware is troublingly easy to deploy. It’s often said that an enterprise is only as secure as its weakest point. Hackers constantly seek out vulnerabilities in legacy corporate networks, typically with automated programs that crawl the internet looking for an opening. That exploited vulnerability might be one individual employee responding to a phishing email. It could be a weak password. Perhaps an exposed IP address. Or an overtaxed IT department one week behind on hardware security patches.

Traditional hub-and-spoke networks invite attack. Threat actors can find them, compromise them, move laterally within them, and steal data. (See Figure 1 below.)


Figure 1. Legacy hub-and-spoke networks “protected” with castle-and-moat security create openings for threat actors to attack.

Cyberattacks won’t go away until business leaders act to counter threats and remove the incentives behind them. And that requires a new way of thinking about cybersecurity: zero trust.

The genesis of Zero Trust Security
In 2010, Forrester security analysts outlined a new approach to cybersecurity, one that took it out of the realm of network-perimeter controls and applied it to all data in motion. Zero trust was different: It was the first security framework that was data-centric. Zero trust posits that all data traffic from anyone, to anywhere, via any route is potentially hostile, and challenges data legitimacy at every stage of travel.

Zero trust, in principle, was intended to isolate the true contributor to risk: trust, or rather, the granting of trust. If trust could be quantified, it could also be minimized. The premise was to minimize the granting of trust (“I’m 80% certain you’re not a threat”) everywhere possible, increasing security and reducing business risk.

In a zero trust environment, access is no longer based on machine identification but on identity, which serves as the new basis for conditional access. Identity underpins zero trust, and it is the best way to manage secure connectivity to applications, destinations, and resources in the modern enterprise workplace: It’s better to authenticate a person than to identify an easily spoofed machine.
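To make that contrast concrete, here is a minimal sketch of identity-based conditional access, evaluated on every request rather than once at a network perimeter. The names and policy signals below (evaluate_access, mfa_passed, device_compliant, the sample policy table) are illustrative assumptions for this sketch, not any particular vendor’s API.

from dataclasses import dataclass

# Hypothetical request context for illustration only; real zero trust brokers
# draw on far richer identity and device-posture signals than this sketch.
@dataclass
class AccessRequest:
    user_id: str            # authenticated identity (e.g., from SSO with MFA)
    mfa_passed: bool        # strong authentication completed for this session
    device_compliant: bool  # device posture check (managed, patched, etc.)
    app: str                # the specific application being requested

# Illustrative policy table: which identities may reach which applications.
ALLOWED_APPS = {
    "alice@example.com": {"payroll", "crm"},
    "bob@example.com": {"crm"},
}

def evaluate_access(req: AccessRequest) -> bool:
    """Decide a single request; in zero trust this runs on every access."""
    if not req.mfa_passed:          # challenge the person, not the machine
        return False
    if not req.device_compliant:    # condition access on current posture
        return False
    # Grant the minimum: access to one named app, never the whole network.
    return req.app in ALLOWED_APPS.get(req.user_id, set())

# Every request is re-evaluated with the latest signals.
print(evaluate_access(AccessRequest("alice@example.com", True, True, "payroll")))  # True
print(evaluate_access(AccessRequest("bob@example.com", True, False, "crm")))       # False

The point of the sketch is the shape of the decision: identity and current context are checked for each named application, and simply being on the network confers nothing.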

When zero trust was first introduced, legacy infrastructure of the time couldn’t scale (nor be easily reengineered) to support dynamic challenges of trust associated with data in motion. In a legacy environment, access is granted via hardware gateways. More “challenges” means more (expensive) hardware stacks housed close to users, applications, and data-processing. Now, more than a decade later, the cloud -- the scalable, abstractable, secure, SASE-enabled cloud --  makes zero trust practical in the form of a Zero Trust Architecture (ZTA).