According to the “State of Enterprise Secure Access Report 2019” from the US software provider Pulse Secure (formerly part of Juniper), companies in the DACH region and Great Britain plan to make greater use of zero trust solutions. 52 percent of the IT security decision-makers surveyed want to start a project based on so-called software-defined perimeter technology (more on this below) within 18 months. The background is the ongoing trend towards cloud computing.

In its “Global Threat Intelligence Report” for 2019, NTT Security likewise sees companies committing more strongly to the trust-nobody philosophy. However, the security provider cites increasingly sophisticated cyber attacks and growing threats from malicious insiders as the reason.
So what is Zero Trust, what are its advantages and disadvantages, and how can companies implement it?
The analysts at Forrester coined the term back in 2010. The concept moves away from the idea of a trusted network within a defined company perimeter and instead starts directly with the data. In 2018, Forrester introduced Zero Trust eXtended (ZTX), an evolved framework that IT managers can use to build their security architectures in accordance with Zero Trust.
According to Mark Wojtasiak, Marketing VP at data security provider Code42, the concept is based on two central pillars:
- identify sensitive data and map its flow;
- clarify who accesses data, when, where, why and how, and what is done with it.
This is based on the conviction that companies should not trust their customers, employees or applications either inside or outside the company’s boundaries. Instead, everything and everyone who tries to access company data must be checked and controlled. It is a consistently data-centric approach based on constant monitoring.
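Pillar two in particular translates directly into telemetry. Below is a minimal sketch of what such a data-centric audit record could look like, assuming a simple JSON audit trail; all field names and values are illustrative, not taken from Code42 or any other product.

```python
# A minimal sketch of pillar two: recording who accesses which data,
# when, from where, why, and how. All names are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DataAccessEvent:
    user: str          # who
    resource: str      # which data was touched
    timestamp: str     # when
    location: str      # where (e.g. source IP or site)
    purpose: str       # why (business justification)
    action: str        # how (read, write, export, ...)

def record_access(event: DataAccessEvent) -> None:
    # In practice this would go to a SIEM or audit store;
    # here we simply emit a structured JSON line.
    print(json.dumps(asdict(event)))

record_access(DataAccessEvent(
    user="jdoe",
    resource="crm/customers.csv",
    timestamp=datetime.now(timezone.utc).isoformat(),
    location="10.20.30.40",
    purpose="quarterly-report",
    action="read",
))
```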
In 2014, Google defined its own zero trust variant with a context-based access concept called BeyondCorp (PDF). Initially, the internet company used it only internally, but in 2019 it began to implement the technology in its services for customers. Various security providers, such as Check Point, support the initiative.
Gartner’s market researchers jumped on the zero trust trend in 2017 with their CARTA approach. The abbreviation stands for “Continuous Adaptive Risk and Trust Assessment” and builds on the original principle. According to CARTA, it is important not only to check users, devices and apps each time they log in, but to continuously verify their trust status during the session. If a change is identified that represents a risk, access to a service can be restricted or cut off entirely.
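The mechanics behind CARTA can be illustrated with a simplified sketch: a trust score that is re-evaluated whenever new risk signals arrive during a session. The signal names, weights and thresholds below are illustrative assumptions, not Gartner specifications.

```python
# A simplified sketch of the CARTA idea: trust is not granted once at
# login, but re-evaluated continuously during the session.

RISK_WEIGHTS = {            # hypothetical signal weights
    "new_device": 0.3,
    "unusual_location": 0.4,
    "impossible_travel": 0.8,
    "malware_alert": 1.0,
}

def evaluate_session(signals: list[str], trust: float = 1.0) -> str:
    # Each observed risk signal reduces the session's trust score.
    for s in signals:
        trust -= RISK_WEIGHTS.get(s, 0.1)
    if trust >= 0.7:
        return "allow"        # full access continues
    if trust >= 0.4:
        return "restrict"     # e.g. read-only or step-up authentication
    return "terminate"        # interrupt access entirely

# During the session, new signals arrive and the decision is re-made:
print(evaluate_session([]))                     # allow
print(evaluate_session(["unusual_location"]))   # restrict
print(evaluate_session(["malware_alert"]))      # terminate
```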
For Code42 manager Wojtasiak, the core concepts of the CARTA approach are as follows:
- deploy unique security gates through adaptive, contextual security platforms;
- continuously monitor, evaluate and prioritize risk and trust, both reactively and proactively;
- factor risk and trust into digital business initiatives early on, starting in the development process;
- provide full transparency, including into the processing of sensitive data;
- detect and respond faster with analytics, AI, automation and orchestration, and prioritize by risk.
As mentioned above, the “Software Defined Perimeter” (SDP) is considered one way to implement Zero Trust. The technology is based on the “black cloud” concept developed by the Defense Information Systems Agency (DISA) of the US Department of Defense. In it, network access and connections are set up according to the need-to-know principle.
The Cloud Security Alliance (CSA) describes the concept as a combination of three components:
- device authentication;
- identity-based access;
- dynamically provisioned connectivity.
“The user sees nothing of the network,” is how Nathan Howe, zero trust specialist at Zscaler, describes the way it works. If someone wants to access an app or a resource on the network, they are authenticated for exactly that resource and routed to it directly. Access management thus shifts from the network perimeter to the resource or app, so users never know where they are in the network.
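How such brokered, per-app access could look in principle is sketched below under simplifying assumptions; the registry, entitlement table and function names are hypothetical and stand in for a real SDP controller.

```python
# A minimal sketch of the SDP idea described above: access is brokered
# per application, and the client never learns the network topology.

APP_REGISTRY = {"crm": "10.0.5.12:443", "wiki": "10.0.7.3:443"}  # hidden from users
ENTITLEMENTS = {"jdoe": {"crm"}}  # identity-based access, per app

def connect(user: str, app: str, authenticated: bool) -> str:
    # 1. Device/user authentication must succeed first.
    if not authenticated:
        return "denied: not authenticated"
    # 2. Identity-based access: only explicitly entitled apps are visible.
    if app not in ENTITLEMENTS.get(user, set()):
        return "denied: no entitlement"   # the app stays invisible
    # 3. Dynamically provisioned connectivity: a tunnel to exactly this
    #    app is set up; the internal address is never exposed.
    _ = APP_REGISTRY[app]  # resolved by the gateway, never returned to the client
    return f"tunnel established to '{app}' (internal endpoint hidden)"

print(connect("jdoe", "crm", authenticated=True))   # tunnel established
print(connect("jdoe", "wiki", authenticated=True))  # denied: no entitlement
```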
At the technological level, according to ESG analyst Jon Oltsik, SDP builds on a number of well-known approaches. Next-generation firewalls (NGFW), network access control (NAC) and the 802.1X standard offer various functions that enable authentication based on individual attributes. That is why numerous providers offer SDP-like solutions, including Cisco, HP (Aruba), Pulse Secure and Ping Identity.
Since SDP depends on numerous technologies and their interaction, there are a few things to consider when implementing it. From Oltsik’s perspective, companies need to consider the following:
- Start with a strategy. Authenticating every transaction and encrypting every network session involves considerable effort. The SDP implementation must therefore be planned strategically from the start; otherwise operations become chaotic and IT risk increases, because one weak component makes the entire SDP infrastructure vulnerable.
- Strong authentication across the stack. The SDP architecture defines a number of connection types between clients, servers, clouds and so on. Each of these connections requires strong authentication from layer 2 to layer 7, appropriate methods (such as tokens, biometrics or smart cards), key management for encryption, certificate management and a public key infrastructure (PKI).
- Time-consuming data collection, processing and analysis. To manage something, it must first be measurable. For SDP it is therefore important to collect, process and analyze all available data types. This includes information about endpoints, users, network flows, directories, authentication systems and threat intelligence. New data types and data sources in the cloud must also be included as they are added. Different data formats need to be normalized (see the sketch after this list), and a distributed, scalable data management architecture established so that all this data can be evaluated in real time where possible.
- Granular usage policies. As soon as a company is technically able to apply detailed access controls, it must create policies on how and when to apply them. The goal is to strike a balance between acceptable risk and disruption to business processes. Since this requires joint analysis and decisions by those responsible for business, IT and security, the process can take a long time. In addition, a certain degree of trial and error may be required to find the right level.
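The normalization step from the third point above could, in principle, look like the following sketch: per-source adapters map heterogeneous records to one common event schema. Field names and sample records are illustrative assumptions.

```python
# A sketch of telemetry normalization: heterogeneous data sources
# (endpoint agent, firewall) are mapped to one common event schema
# so they can be evaluated together.

def normalize_endpoint(raw: dict) -> dict:
    return {"user": raw["username"], "source": raw["host_ip"], "event": raw["alert"]}

def normalize_firewall(raw: dict) -> dict:
    return {"user": raw.get("uid", "unknown"), "source": raw["src"], "event": raw["action"]}

NORMALIZERS = {"endpoint": normalize_endpoint, "firewall": normalize_firewall}

def ingest(source_type: str, raw: dict) -> dict:
    # Dispatch to the right normalizer; a new source only needs a new entry.
    return NORMALIZERS[source_type](raw)

print(ingest("endpoint", {"username": "jdoe", "host_ip": "10.1.1.5", "alert": "usb_mount"}))
print(ingest("firewall", {"src": "10.1.1.5", "action": "deny"}))
```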
The strategy by which Zero Trust can be implemented varies depending on a company’s infrastructure and needs. Nevertheless, security provider Palo Alto Networks has defined a five-step plan (PDF) that companies can use as a guide.
1. Define the surface to be protected
The protect surface is defined by the sensitive information that needs to be secured. It is usually significantly smaller than the attack surface. Note that zero trust protection goes beyond data and also covers other elements of the network. When defining the protect surface, all critical data, applications, assets and services (DAAS) should be taken into account, in particular (a sketch of such an inventory follows the list):
- Data: payment card information (PCI), protected health information (PHI), personally identifiable information (PII) and intellectual property (IP);
- Applications: both standard and custom software;
- Assets: SCADA controls, point-of-sale terminals, medical devices, production facilities and Internet of Things (IoT) devices;
- Services: DNS, DHCP and Active Directory.
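As a rough illustration of such a DAAS inventory, the following sketch records each protect-surface element with its type and category; the structure and entries are illustrative assumptions, not a Palo Alto Networks format.

```python
# A sketch of a DAAS inventory: each entry records a critical data set,
# application, asset or service that belongs to the protect surface.

PROTECT_SURFACE = [
    {"type": "data",        "name": "payment-card-db",  "category": "PCI"},
    {"type": "data",        "name": "patient-records",  "category": "PHI"},
    {"type": "application", "name": "erp-system",       "category": "standard"},
    {"type": "asset",       "name": "plc-line-3",       "category": "SCADA"},
    {"type": "service",     "name": "active-directory", "category": "identity"},
]

def items_of_type(kind: str) -> list[str]:
    return [e["name"] for e in PROTECT_SURFACE if e["type"] == kind]

print(items_of_type("data"))  # ['payment-card-db', 'patient-records']
```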
2. Map transaction flows
The way sensitive data is accessed on the network determines how it should be protected. This involves scanning and mapping transaction flows in the network to determine how the different DAAS components interact with other resources. The resulting flow maps show where controls need to be inserted.
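A toy version of this mapping step could derive a flow map from connection logs, as in the following sketch; the log format and component names are illustrative assumptions.

```python
# A sketch of step 2: deriving a transaction-flow map from connection
# logs, to see which DAAS components talk to which resources.
from collections import defaultdict

connection_log = [
    ("web-frontend", "payment-card-db"),
    ("web-frontend", "erp-system"),
    ("erp-system", "payment-card-db"),
]

flows = defaultdict(set)
for src, dst in connection_log:
    flows[src].add(dst)

# Every edge into a protect-surface component is a place for a control:
for src, dsts in flows.items():
    for dst in sorted(dsts):
        print(f"{src} -> {dst}  (candidate control point)")
```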
3. Build a zero trust architecture
Traditionally, the first step in any network design is the architecture. With Zero Trust, it is only the third step. Zero trust networks are tailor-made, and there is no universal design. The individual zero trust architecture only becomes visible once the protect surface is defined and the transaction flows are mapped.
According to Palo Alto Networks, a next-generation firewall should be used as a segmentation gateway that enforces granular Layer 7 access as a microperimeter around the protect surface. With this measure, every packet that accesses a resource within the protect surface passes through a firewall, allowing Layer 7 policies to be enforced that control and inspect access at the same time.
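Much simplified, the decision logic of such a segmentation gateway can be sketched as follows; the application names and policy table are illustrative assumptions, not NGFW configuration.

```python
# A much-simplified segmentation-gateway check: every request towards the
# protect surface is inspected at the application layer, not just by port.

ALLOWED_L7 = {("crm-client", "payment-card-db"): {"https"}}  # app pair -> allowed protocols

def gateway_check(src_app: str, dst: str, l7_protocol: str) -> bool:
    # An open port proves nothing; the identified Layer 7 protocol must
    # match the policy for this specific source/destination pair.
    return l7_protocol in ALLOWED_L7.get((src_app, dst), set())

print(gateway_check("crm-client", "payment-card-db", "https"))  # True
print(gateway_check("crm-client", "payment-card-db", "ssh"))    # False: blocked
```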
4. Create the zero trust policies
As soon as the zero trust network has been set up, the supporting policies must be drawn up. Here Palo Alto Networks proposes the Kipling method, which addresses the who, what, when, where, why and how of the network.
For one resource to communicate with another, a specific rule must whitelist this traffic. The Kipling method enables granular enforcement of Layer 7 policies, so that only known, permitted traffic and legitimate application communication take place in the network. This process reduces both the attack surface and the number of port-based rules enforced by conventional network firewalls.
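A Kipling-style rule can be pictured as a tuple answering all six questions, with everything unmatched denied by default. The following sketch uses illustrative rule fields and a hypothetical request format.

```python
# A sketch of a Kipling-style whitelist rule: each rule answers who, what,
# when, where, why and how; anything not matched by a rule is denied.

RULES = [{
    "who": "finance-group",       # who may access
    "what": "payment-card-db",    # what resource
    "when": range(8, 18),         # when (business hours)
    "where": "corp-network",      # where from
    "why": "billing",             # why (tagged business purpose)
    "how": "https",               # how (application/protocol)
}]

def is_allowed(req: dict) -> bool:
    for r in RULES:
        if (req["who"] == r["who"] and req["what"] == r["what"]
                and req["hour"] in r["when"] and req["where"] == r["where"]
                and req["why"] == r["why"] and req["how"] == r["how"]):
            return True
    return False  # default deny: unknown traffic is never whitelisted

print(is_allowed({"who": "finance-group", "what": "payment-card-db",
                  "hour": 10, "where": "corp-network", "why": "billing",
                  "how": "https"}))  # True
```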
5. Monitor and maintain the network
The final step is to monitor and maintain the network regarding the operational aspects of Zero Trust. This means continuously reviewing all internal and external logs.
Monitoring and logging all network traffic is a key component of Zero Trust. It is important that the system sends as much telemetry as possible. This data provides new insights into how the zero trust network can be improved over time by repeating the five steps.
When it comes to choosing the technology, Sven Kniest, DACH manager at identity specialist Okta, recommends making sure that the solutions and services can be integrated into the architecture. With firewalls, privileged access management (PAM) solutions, SIEM and CASB products and the like, it is important to ensure that the necessary interfaces to one another exist. If they are not available, the process behind Zero Trust cannot be mapped.
With the increasing acceptance of the cloud in German companies, it is also possible to implement Zero Trust as a cloud-based solution. Elmar Witte, security specialist at Akamai, explains the concept:
Cloud-based zero trust security models rely on granting authorized users access to defined applications only after successful authentication. To implement this, all traffic is monitored around the clock in a first step, and access to every resource must be explicitly permitted by a central management system.
In a second step, the classic network design based on a DMZ (demilitarized zone, an isolated subnetwork that serves as an additional security layer for a local company network) is converted into an “isolated services” approach, and the application is completely isolated from the internet. This is implemented by a so-called identity-aware proxy from the cloud.
This approach protects both locally deployed applications and apps in the public cloud. The applications are additionally secured by a web application firewall, which blocks web attacks and at the same time accelerates the delivery of the applications.
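In principle, the identity-aware-proxy pattern boils down to the flow sketched below: unauthenticated requests are redirected to the identity provider, and only authorized users reach the isolated backend. The session store, entitlements and return values are illustrative assumptions.

```python
# A minimal sketch of the identity-aware-proxy pattern: the application is
# not reachable from the internet; every request terminates at the proxy,
# which authenticates the user and only then forwards to the backend.

SESSIONS = {"token-abc": "jdoe"}        # issued after successful login
ENTITLED = {"jdoe": {"hr-portal"}}      # user -> applications

def identity_aware_proxy(token: str, app: str) -> str:
    user = SESSIONS.get(token)
    if user is None:
        return "302 redirect to identity provider"  # unauthenticated
    if app not in ENTITLED.get(user, set()):
        return "403 forbidden"                      # authenticated, not authorized
    return f"forwarding request for {user} to isolated backend of '{app}'"

print(identity_aware_proxy("token-abc", "hr-portal"))  # forwarded
print(identity_aware_proxy("bad-token", "hr-portal"))  # redirect to IdP
```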
Authenticating access is one of the most important building blocks of Zero Trust. For Kniest, it should work as simply as possible, ideally without a password, since weak credentials are a main cause of data breaches. He considers single sign-on (SSO) and multi-factor authentication (MFA) to be central functions for raising the level of protection.
Depending on the user’s role and how critical the data they work with is, so-called adaptive factors come into play. Based on the risk determined (risk scoring), additional authentication factors are required for access. These include, for example, biometric features, tokens or smart cards. Parameters such as the location, the device or the service from which access is made feed into the risk assessment.
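A simplified sketch of this adaptive logic: contextual parameters yield a risk score, and the score determines which factors are required. The weights and thresholds are illustrative assumptions, not Okta defaults.

```python
# A sketch of adaptive authentication: context produces a risk score, and
# the score decides whether SSO suffices or extra factors are required.

def risk_score(known_device: bool, usual_location: bool, sensitive_data: bool) -> int:
    score = 0
    if not known_device:
        score += 40
    if not usual_location:
        score += 30
    if sensitive_data:
        score += 30
    return score

def required_factors(score: int) -> list[str]:
    if score < 30:
        return ["sso"]                        # low risk: seamless sign-on
    if score < 70:
        return ["sso", "otp-token"]           # medium risk: one extra factor
    return ["sso", "otp-token", "biometric"]  # high risk: step up further

s = risk_score(known_device=False, usual_location=True, sensitive_data=True)
print(s, required_factors(s))  # 70 ['sso', 'otp-token', 'biometric']
```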
Implementing such a system across the board is complex, so Kniest advises a step-by-step approach. However, companies cannot avoid making a fundamental decision in favor of the zero trust approach first. Once that is done, the preparatory work of building identity and access management (IAM) follows:
For identities to be authenticated, they must be known. The first step is therefore a uniform identity management platform that determines who is allowed to access what in the first place. The second step, access management, is the most important part for authentication: here it is defined whether and when passwords, other factors or SSO are used.
As soon as the IAM is operational, the gradual rollout of Zero Trust can begin on top of it. Services accessed from outside the classic perimeter are the traditional entry points and should be prioritized. These include cloud applications, interfaces to suppliers and partners (supply chain) and customer portals. In addition, the focus is on applications and resources that contain sensitive information such as customer data.