Zero trust is an important architectural shift in information security. It moves us away from the perimeter-based defense-in-depth models of the past toward layers of control closer to what is valued most: the data. When initially defined by an analyst at Forrester, zero trust focused on the network providing application isolation to prevent attacker lateral movement. It has since evolved to become granular and pervasive, providing authentication and assurance between components, including microservices.
As the benefits of zero trust become increasingly clear, the model has spread, relying upon a trusted computing base and data-centric controls as defined in NIST Special Publication 800-207. So, as zero trust becomes more pervasive, what does that mean in practice? How do IT and cybersecurity professionals manage the deployment and maintain assurance of its effectiveness?
Zero Trust Architecture: Never Trust, Always Verify
Zero trust architectures reinforce the point that no layer of the stack trusts the underlying components, whether hardware or software. Security properties are therefore verified for every dependency and interdependency, on first use and intermittently thereafter (the dynamic authentication and verification tenets of zero trust). Each component is built as if its adjoining or dependent components may be vulnerable: each one assumes responsibility for assuring the trust level it asserts, and must be able to detect a compromise or even an attempted compromise.
This paradigm can be confusing at first, in that zero trust instills the principle of isolation at every layer. Isolation enforces the "zero trust" between components, while verification of security properties and identity is performed continually to assure that expected controls are met. A component may refuse to execute if the expected properties of its dependencies cannot be assured. Zero trust architectures follow the maxim "never trust, always verify." This enables detection and prevention of lateral movement and privilege escalation at each component and results in higher assurance for the system and software.
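To make that verify-before-execute behavior concrete, here is a minimal sketch in Python, assuming SHA-256 measurements and a hard-coded table of golden values; in practice the golden values would be signed and distributed out of band, and all names here are illustrative, not part of any standard:

```python
import hashlib
import hmac

# Hypothetical golden measurements for each dependency (in a real
# deployment these would be signed and delivered out of band).
EXPECTED = {
    "config-service": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def measure(artifact: bytes) -> str:
    """Measure a dependency artifact by hashing it."""
    return hashlib.sha256(artifact).hexdigest()

def verify_dependency(name: str, artifact: bytes) -> bool:
    """Never trust: compare a fresh measurement against the golden value."""
    expected = EXPECTED.get(name)
    if expected is None:
        return False  # unknown dependencies are untrusted by default
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(measure(artifact), expected)

def start_component(deps: dict[str, bytes]) -> None:
    """Refuse to run unless every dependency verifies."""
    for name, artifact in deps.items():
        if not verify_dependency(name, artifact):
            raise RuntimeError(f"dependency {name} failed verification; refusing to run")
    # ... only now begin normal execution ...
```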
Identity, authentication, authorization, access controls, and encryption are among the core tenets of any zero trust architecture, where deliberate and dynamic decisions are continuously made to verify assurance between components. While zero trust is often discussed at the network layer, a result of its origin as a Forrester concept, the definition has evolved considerably over the last decade into a pervasive concept that spans infrastructure, device firmware, software, and data.
Zero trust is often discussed as it relates to the network, with the isolation of applications by network segment ensuring that controls such as strong encryption and dynamic authentication are met. Zero trust can also be applied at the microservices level, providing assurance of controls and measurements via verification between services. This granular application of the model further strengthens prevention and detection of attacker lateral movement.
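One common way to realize this between services is mutual TLS, in which each service presents a certificate issued by an internal CA and refuses unauthenticated peers. Below is a minimal sketch of a server-side context using Python's standard ssl module; the file names are hypothetical:

```python
import ssl

# Server-side context for a hypothetical "service A": require and verify
# a client certificate, so both peers are authenticated (mutual TLS).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3        # strong encryption only
ctx.verify_mode = ssl.CERT_REQUIRED                 # never trust: no cert, no connection
ctx.load_cert_chain(certfile="service-a.pem", keyfile="service-a.key")
ctx.load_verify_locations(cafile="internal-ca.pem") # trust only the internal CA
```

In practice, such service certificates are short-lived and reissued continuously, which is what makes the authentication dynamic rather than a one-time handshake.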
Zero trust begins with infrastructure assurance and has become pervasive up the stack and across applications. A hardware root of trust (RoT) is immutable, with a cryptographic identity bound to the Trusted Platform Module (TPM). The infrastructure assurance example illustrates the tenets of a zero trust architecture. Upon boot, the system first verifies that the hardware components are as expected.
Next, the system boot process verifies the system and each dependency against a set of so-called "golden policies," which include expected measurements attested to with a digital signature using the cryptographic identity in the TPM. If one of the policy comparisons does not match, the process may be restarted, or the system boot may be halted. While there are several hardware- and software-based RoT options, the resiliency guidelines for firmware and BIOS are generally followed in developing the policies and measurements used from boot.
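The comparison logic can be sketched roughly as follows, assuming SHA-256 measurements and a golden value whose signature has already been verified; both are simplifications of what a TPM actually does:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style "extend": the new register value hashes the old value with
    # the new measurement, so the final value commits to the whole chain.
    return hashlib.sha256(pcr + measurement).digest()

def verify_boot_chain(stages: list[bytes], golden_pcr: bytes) -> bool:
    """Measure each boot stage in order and compare against the golden policy."""
    pcr = b"\x00" * 32  # measurement registers start zeroed at reset
    for stage_image in stages:
        pcr = extend(pcr, hashlib.sha256(stage_image).digest())
    return pcr == golden_pcr

# On a mismatch, the platform may retry from a known-good image or halt:
# if not verify_boot_chain(stages, golden): recover_or_halt()
```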
Attestations are signed by a RoT at each stage of the boot process and are used both to identify components to relying parties and to provide an assurance of trust, verifying at the most basic level that the system and its components are as required. The dependencies may be chained or may be verified individually. These attestations are also provided at runtime, supporting the zero trust requirement for dynamic authentication and access control; in this case, for infrastructure components. Attestations aid in the requirement to verify the identity of components, essential for providing assurance of each component.
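As a rough illustration of the signed-attestation step, the sketch below uses Ed25519 from the pyca/cryptography library. A real RoT signs inside the hardware with a key that never leaves it, and the evidence format here is invented:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Stand-in for the RoT's attestation key; in hardware, the private key
# never leaves the TPM/RoT, and only the public half is exported.
rot_key = ed25519.Ed25519PrivateKey.generate()
rot_public = rot_key.public_key()

def attest(evidence: bytes) -> bytes:
    """RoT signs the evidence (e.g., current measurements plus a nonce)."""
    return rot_key.sign(evidence)

def verify_attestation(evidence: bytes, signature: bytes) -> bool:
    """A relying component checks identity and integrity in one step."""
    try:
        rot_public.verify(signature, evidence)
        return True
    except InvalidSignature:
        return False

sig = attest(b"pcr0=9f86...;nonce=42")  # hypothetical evidence blob
assert verify_attestation(b"pcr0=9f86...;nonce=42", sig)
```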
Any attacker that has infiltrated a component or its software would need to survive this dynamic and periodic verification and authentication to remain a threat. The attacker would also have to find a way to escalate privileges or move laterally between isolated components that do not trust each other.
Trusted Control Sets
The Trusted Computing Group's (TCG) Reference Integrity Manifest, based on NIST's firmware resiliency guidance (SP 800-193), provides the trusted controls for policy and measurement of the firmware. Further up the stack, trusted control sets that provide the verification necessary for zero trust include the CIS Controls and the CIS Benchmarks. Trusted third parties such as NIST, CIS, and TCG provide the necessary external and established vetting process to set control and benchmark requirements. An example would be attestations used to show compliance with a CIS operating system or container Benchmark at a specified level of assurance.
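As a toy illustration (the control names and level are invented, not taken from any CIS Benchmark), verifying such an attestation reduces to comparing the settings a system reports against the benchmark's required values at the chosen level:

```python
# Hypothetical subset of a hardening benchmark at "Level 1".
BENCHMARK_L1 = {
    "ensure_firewall_enabled": True,
    "ensure_ssh_root_login_disabled": True,
}

def complies(reported: dict[str, bool], benchmark: dict[str, bool]) -> bool:
    """A system attests its settings; the verifier checks every control."""
    return all(reported.get(ctrl) == want for ctrl, want in benchmark.items())

print(complies({"ensure_firewall_enabled": True,
                "ensure_ssh_root_login_disabled": True}, BENCHMARK_L1))  # True
```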
What Evidence Supports This Shift to Zero Trust?
Interestingly, at about the same time that zero trust architectures began to take shape, Lockheed Martin developed its Cyber Kill Chain (in 2011). The Cyber Kill Chain was first defined to separate the stages of an attack, enabling mitigation and detection defenses between stages. The MITRE ATT&CK framework is used more widely today; it builds on the foundation of Lockheed Martin's model, incorporating gaps identified through use and the evolving threat landscape. For the purposes of this paper, the Cyber Kill Chain will be used to simplify the correlation, but the discussion can be abstracted to the MITRE ATT&CK framework.
The Lockheed Martin Kill Chain was developed in response to the ever-increasing sophistication of advanced persistent threat (APT) attacks, which had shifted to include supply chain attacks. By implementing defenses and controls between attack phases, including requirements to prove identity dynamically via authentication, attackers' lateral movement or privilege escalation attempts can be detected more easily. Moving detection and prevention earlier in the kill chain is ideal for preventing attacks from succeeding (e.g., exfiltration of data or disruption within the network).
Applying detection and prevention techniques pervasively in the stack, and across applications and functions, with dynamic access controls that verify the authentication of attested components, supports zero trust architectural tenets and enables detection early in the kill chain. That the tenets of zero trust work becomes clear when you consider its deployment in concert with kill chain detection controls, as evidenced by attacker dwell time trends.
Reducing Dwell Time
Since the kill chain was first introduced, attacker dwell time (the time an attacker remains on a network undetected) has been dramatically reduced. This can be clearly seen in both the global and regional dwell time changes as different regions adopted the Cyber Kill Chain and zero trust defenses. According to FireEye's annual M-Trends reports, the global median dwell time was 229 days in 2013 and had fallen to 56 days in the 2020 report. The regional numbers also support the success of this architectural approach, given the known disparity in adoption of the zero trust architectural pattern and the Kill Chain and MITRE ATT&CK defense frameworks.
The United States was known to be an early adopter of both. Taking 2017 as an example, the median dwell time in the Americas was 75 days, compared with 172 days in Asia. Smaller or less well-resourced organizations in any region, at any point in time, may experience dwell times that differ wildly from those of larger, well-resourced organizations. Still, the dwell time numbers help demonstrate the success of these controls with tangible data.
Zero trust has evolved from a network-only definition, where applications were segregated, to a more granular model that supports detecting unexpected behavior between all components. The logical connection between zero trust and the Lockheed Martin Kill Chain demonstrates the clear value of both models. It also suggests a future in which zero trust is increasingly data-centric, built upon a foundation of isolated components that attest to their verified identity and assurance levels from boot in the infrastructure, up and across the stack to the microservices level.
Article Provided By Center for Internet Security
If you would like TSVMap to assist your business with assessing your essential systems and applying the TSVMap methodology to ERP Systems, MRP Systems, Cyber Security, IT Structure, Web Applications, Business Operations, and Automation, please contact us at 864-991-5656 or email@example.com.