Time for Change: Moving Towards Zero Trust
29 April 2020
EVEN THOUGH most IT professionals accept that perimeter-based security is an inherently flawed concept, why has the move towards zero trust been relatively slow? Part of the reason, asserts Phil Allen, is that most businesses are risk- and change-averse (even more so when it comes to security) and these types of projects still require significant resources.
The recent worldwide health crisis has forced organisations to move staff to home working and exposed not just security issues, but also the limitations and bottlenecks around legacy secure access workflows. From a technological standpoint, deploying an identity-centric zero trust approach is relatively straightforward, but the organisational and change management side is often much more time-consuming.
The concept of perimeter security has evolved from the military, where it was assumed that, with sentries on watch for danger, everything within an encampment was safe. In the digital world, access to the corporate network is guarded by firewalls and VPN access, but this notion is no longer valid. This ‘safe network’ proposition is unsound: according to the 2019 Data Breach Investigations Report, around one third (34%) of all breaches happened as a result of insider threat actors.
In addition, the overarching concept of a perimeter is starting to dissipate as organisations increasingly move towards a hybrid world wherein applications and data can be on-premises, hosted, in the cloud or delivered by Software-as-a-Service, and end users may be geographically spread across sites or, increasingly, working from home.
The perimeter was also traditionally secured via a set of credentials – in the worst-case scenario using just a username and password. This old paradigm for access has been proven massively flawed, as the theft of credentials is a regular occurrence through active attacks, negligence or weak password generation. A survey found that, among parents, one of their children’s names is a Top 10 password choice. It's something that co-workers would probably know and that would not be hard to discover through social media observation.
A matter of trust
Instead, zero trust adopts the approach that anything inside or outside the corporate perimeter should not be trusted, but instead must be verified before a connection is made and access to systems is granted. This is not just a one-off process, either. Verification takes place every time for each connection, even for lateral movement within the same network.
The verification process can take many forms and may include digital certificates, multi-factor authentication, Network Access Control, Virtual Private Networks and Privileged Access Management, along with systems that are ever-watchful of behavioural anomalies, such as intrusion detection systems.
Authenticating the user is the backbone of a zero trust security architecture. This means the user should have multiple factors that prove their identity. There are three possible factor categories: something the user knows (like a password), something the user has (like a phone) and something the user is (like a fingerprint). Different user activities should require different levels of authentication. Reading e-mail might only necessitate a password. Issuing a payslip might require a password and proof of ownership of a private key stored on a hardware device.
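The step-up logic described above can be sketched as a simple policy that maps each action to the factor categories it requires, with unknown actions denied by default. This is a minimal illustration, not any vendor's implementation; the action names and factor labels are hypothetical.

```python
# Illustrative policy: which factor categories each action requires.
# "knowledge" = something the user knows, "possession" = something the
# user has, "inherence" = something the user is. Names are hypothetical.
REQUIRED_FACTORS = {
    "read_email": {"knowledge"},                   # a password is enough
    "issue_payslip": {"knowledge", "possession"},  # password + hardware key
}

def may_proceed(action: str, presented_factors: set) -> bool:
    """Deny by default; allow only when every required factor was presented."""
    required = REQUIRED_FACTORS.get(action)
    if required is None:
        return False  # unknown action: default deny
    return required.issubset(presented_factors)
```

The key property is the default deny: an action that no policy covers fails, however many factors the user presents.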
The devices users are employing to connect are also at risk. Sometimes, valid users can be tricked into doing work on compromised devices. If the computer or phone that the user is working on is compromised, critical enterprise data and passwords will be compromised, even if the user has been strongly authenticated. Device identification and certificate issuance can be leveraged to check whether the user is working on validated hardware that has not been subjected to tampering by those with criminal intent.
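At its simplest, the device check described above amounts to comparing the certificate a device presents against a registry of certificates issued at enrolment. The sketch below, using only the Python standard library, compares SHA-256 fingerprints; in practice the registry would live in an MDM or PKI system, and the certificate bytes here are placeholders.

```python
import hashlib

# Hypothetical registry of SHA-256 fingerprints for certificates issued
# to enrolled devices at registration time.
ENROLLED_DEVICE_FINGERPRINTS = {
    hashlib.sha256(b"example-device-cert-der-bytes").hexdigest(),
}

def is_enrolled_device(cert_der: bytes) -> bool:
    """Check the presented device certificate against the enrolment registry."""
    return hashlib.sha256(cert_der).hexdigest() in ENROLLED_DEVICE_FINGERPRINTS
```

A real deployment would also validate the certificate chain and revocation status rather than rely on a fingerprint match alone.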
Device and application validation
Even if there is a valid user on a registered and validated device, they may still be missing a critical security patch or they might have been conned into installing a malicious browser plugin. In addition, the user might be using an imposter application that's emulating a known resource to harvest login credentials. Any of these cases could allow an attacker into a critical system.
Methods of application validation vary widely. Some things can be accomplished through device management. Others, like the validity of an OAuth client registration, require newer and tougher security standards such as Proof Key for Code Exchange and Token Binding.
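Proof Key for Code Exchange (RFC 7636) is compact enough to sketch: the client generates a random `code_verifier`, sends its SHA-256 hash (the `code_challenge`) with the authorisation request, and later proves possession of the verifier at the token endpoint. The following uses only the Python standard library; the function names are illustrative.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_checks(verifier: str, challenge: str) -> bool:
    """The authorisation server's check at the token endpoint."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge
```

Because only the hash travels with the initial request, an attacker who intercepts the authorisation code cannot redeem it without the original verifier.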
With the central tenet of a zero trust approach stating that each connection must be validated and then authorised, a central authorisation engine must judge whether the user can perform a given transaction. The default answer should always be in the negative unless there's enough information to decide. This may involve static rules like “only employees can send corporate e-mail” and heuristic rules like “only users with a risk score below 65 can view the corporate directory.” A risk-scoring system employs several weighted variables, such as behavioural biometrics, continuous authentication, location, time and a comparison against patterns of past attackers, to determine how likely it is that the current transaction is malicious.
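Such an authorisation engine can be sketched as a default-deny decision function combining a static rule with a weighted risk score. The signal names, weights and threshold below are invented for illustration; real engines draw these from live telemetry and tuned models.

```python
# Hypothetical weighted risk signals; names and weights are illustrative.
RISK_WEIGHTS = {
    "unfamiliar_location": 40,
    "unusual_hour": 15,
    "behavioural_anomaly": 35,
    "matches_attacker_pattern": 50,
}

RISK_THRESHOLD = 65  # the threshold used in the example rule above

def risk_score(signals: set) -> int:
    """Sum the weights of all observed risk signals; unknown signals score 0."""
    return sum(RISK_WEIGHTS.get(s, 0) for s in signals)

def authorise(action: str, is_employee: bool, signals: set) -> bool:
    """Default deny; combine a static rule with a heuristic risk threshold."""
    if action == "send_corporate_email":
        return is_employee and risk_score(signals) < RISK_THRESHOLD
    if action == "view_directory":
        return risk_score(signals) < RISK_THRESHOLD
    return False  # no rule matched: deny
```

The final `return False` embodies the article's point that the default answer should always be negative when there is not enough information to decide.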
Still early adopters
Even though the benefits are clear, and the technology is maturing rapidly, zero trust adoption is still relatively low. According to an IDG survey from 2018, only 8% of organisations are actively using zero trust (although another 10% are at the pilot stage). The main factor behind this sluggishness is that the perimeter approach is already deployed, relatively frictionless and simple to manage. The fact that it's inherently insecure is easy to brush under the carpet until a breach occurs that prompts demands for improved security.
Another fear is that current employees will be locked out of doing their work due to the new and somewhat more stringent authentication and authorisation policies. Fortunately, traffic can be monitored before the transition to see how many people would be able to accomplish their daily tasks under the proposed architecture. Problems can be rectified before the transition is made to ensure that no-one is locked out.
Many enterprises also choose to dip a toe in the water by building one new non-critical application outside their firewall and seeing how it plays out within a zero trust model. This can be thought of as establishing a micro perimeter. These trailblazer applications can then serve as a template for wider zero trust adoption across an expanded set of applications.
Best Practice approach
Another point worth noting is that zero trust is a conceptual idea and not an industry agreed standard. As such, there are different technical and vendor approaches to achieving its goals.
However, several notable and independent organisations have set out guidance to help organisations improve security. For example, the National Cyber Security Centre has published its Zero Trust Architecture Design Principles, which is currently out for feedback. In February, the National Institute of Standards and Technology, an agency of the US Department of Commerce, issued its own Zero Trust Architecture (SP 800-207), which is also out for consultation. A comparison of the two documents reveals significant overlap that can be summarised as follows:
- Know your architecture (including users, devices and services)
- Create a single strong user identity
- Create a strong device identity
- Authenticate everywhere
- Know the health of your devices and services
- Focus your monitoring on devices and services
- Set policies according to the value of the service or data
- Control access to your services and data
- Don't trust the network (including the local network)
- Choose services designed for zero trust
Although there's no regulatory requirement for the adoption of zero trust at a national level or through an industry body like the PCI, this position may well change in the future as larger deployments - especially those within the public sector - begin to report back on the efficacy of the approach.
Proven and standardised protocols
Another point worth noting is that, although zero trust has no official standard, many of the methods used to connect the various validation, authentication and encryption elements into a coherent process use a set of proven and highly standardised protocols and technologies.
It's also highly advisable to use vendors that support standards such as FIDO2, OAuth 2.0 and OpenID when building out a zero trust environment to allow for flexibility and in order to avoid the negative effects of vendor lock-in.
The rationale for moving away from a perimeter and towards a more effective zero trust approach is hard to fault. Well-constructed guidelines, such as those developed by the National Cyber Security Centre, offer a Best Practice methodology. Starting on a smaller scale with just a single application can help teach valuable skills to enable a successful roll-out across any organisation.
Phil Allen is Vice-President (EMEA) at Ping Identity