
How IT becomes future-proof

by Tejas Dhawan

Several disruptive trends are currently shaping IT. Driven by the corona crisis, the work of most employees is moving to the home office and thus, on a large scale, to the edge of the respective infrastructures. Edge, cloud and mobile are leading to hybrid infrastructures. The demands on their availability, anytime and anywhere, are rising; after all, home office workers and the companies that employ them depend on functioning, secure end devices, VPNs and applications. If these fail, expensive business interruptions are inevitable.

The infrastructure is under permanent security threat. Constantly new attack variants hit company infrastructures that are often not sufficiently secured. Sophisticated attacks (APTs, Advanced Persistent Threats) target companies' core knowledge and core processes, while ransomware is used to extort money. A lack of security awareness among employees often means that fraudsters and data thieves have an easy job.

Gigamon has developed innovative tools to deliver the right data to the right analysis tool. This means that bottlenecks, malware and other problems in rapidly growing, heterogeneous infrastructures serving a wide range of applications can be recognized immediately and, where necessary, rendered harmless and eliminated.

The cloud, staff shortages and new threats are forcing investments in cybersecurity

The pressure to invest in security is evidenced by the IT Cybersecurity Spending Survey 2020 conducted by the SANS Institute. More than 450 respondents from IT and IT security management at companies of different sizes took part in this worldwide study. The focus was on small and medium-sized companies with up to 5,000 employees. Geographically, the emphasis was on the United States, but companies from Western Europe were also represented above average. The most important economic sectors represented are the financial sector and the public sector, both with traditionally high security requirements.

Four disruptive factors make investments in IT security necessary: the growing use of public cloud services, which leads to hybrid infrastructures; new threats; stricter data protection laws such as the European General Data Protection Regulation (GDPR) and the corresponding provisions in California; and a shortage of staff.

The most important motivations for cybersecurity spending in 2020 are regulatory compliance (69.4 percent), reducing security incidents (59.1 percent) and keeping up with ever-new cyber threats (56.9 percent).

Each of the disruptive trends mentioned above leads to characteristic investment decisions. According to 70.9 percent of those surveyed, the use of public cloud infrastructure drives investments in cloud security monitoring, and 52.6 percent prioritize the procurement of cloud-specific CASBs (Cloud Access Security Brokers). New threats lead to the acquisition of network detection and response tools (50.5 percent) or endpoint detection and response (EDR) tools.

The response to stricter legal requirements consists mainly of training staff (53.7 percent), and training also plays an important role in dealing with new threats (50.2 percent, second place) and public cloud infrastructure (52.2 percent, third place). This is all the more difficult because skilled personnel are an increasingly scarce resource.

Only 30 percent of those surveyed also finance security functions through budgets in areas other than IT (Image: Sans)

At the same time, there is less money available, or greater performance is expected from the same IT budget. But saving on personnel is not a good strategy, says SANS: 32.7 percent of the survey participants would invest new money in additional staff, only 17.8 percent in new technologies, and only 14.6 percent in training for existing staff.

However, this focus on new hires could prove counterproductive, because well-trained personnel who receive regular training stay longer and are more motivated, SANS continues. In addition, more complex tasks require more sophisticated knowledge and corresponding tools, which have to be learned before they can be used. After all, the best analysis tool is of no use if staff, lacking the appropriate training and the ability to apply it to their own company environment, cannot interpret the results and react to them appropriately.

According to SANS, there is scope here to motivate the specialist departments to finance the security spending they require themselves. Only 30 percent of the companies surveyed do this. Yet there are undeniable advantages if security expenditure for the IT infrastructure does not have to be borne by the IT department alone. This approach is also more transparent than subsequently allocating IT security costs to individual departments using allocation keys that are difficult to understand.

Such an approach also strengthens communication between top management, departments and IT. And it follows the logic of the matter: after all, the business of the specialist departments increasingly depends on functioning applications. Contributing to their optimal security is therefore in the departments' own fundamental interest.

This applies particularly to new types of digital applications, including the data they use, because these applications will form the core of business activities in the future. They turn entire economic sectors upside down, change old value chains and enable completely new types of digital business that no longer respect industry boundaries.

Examples of such applications are the apps of sharing services such as Uber or Airbnb, to name just the best known, crowdfunding apps, IoT-based industrial control and monitoring systems, robo-advisors for financial investments and other fields, and many more.

Microservice architectures instead of monolithic applications

Not only are the business areas changing, but also the applications themselves: in this “new tomorrow” of IT, applications consist of up to 15 tiers and many, sometimes hundreds of, containerized microservices from different sources and with different tasks. These microservices are used in parallel by many apps, and their communication structures are highly complex. Many companies today run several hundred customer-specific applications, many of which consist of microservices. Interactions between microservices now account for 80 percent of network traffic.

This in turn makes it difficult to monitor and control the applications as a whole, because practically every microservice involved in an application has to be checked both on its own and in its interaction with other microservices. However, microservices contributed by third parties, such as public cloud providers, can hardly be instrumented by the user for such access.
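As a minimal illustration of what checking each service individually can look like, the following Python sketch polls a hypothetical health endpoint of each microservice; the service names, URLs and the /health convention are assumptions for this example, not part of any particular product.

```python
# Minimal sketch: poll each microservice's health endpoint and flag failures.
# Service names and the "/health" convention are hypothetical assumptions.
import json
import urllib.request

SERVICES = {
    "orders":    "http://orders.internal:8080/health",
    "payments":  "http://payments.internal:8080/health",
    "inventory": "http://inventory.internal:8080/health",
}

def check_service(name: str, url: str, timeout: float = 2.0) -> dict:
    """Return a small status record for one microservice."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8"))
            return {"service": name, "ok": resp.status == 200, "detail": body}
    except Exception as exc:  # network error, timeout, malformed response ...
        return {"service": name, "ok": False, "detail": str(exc)}

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(check_service(name, url))
```

Checks of this kind only cover the services a company operates itself, which is precisely why externally contributed microservices remain a blind spot.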

In addition, even small changes to one microservice can affect many other components of an app, up to and including serious performance losses, security holes or crashes. At the same time, it is difficult to keep the entire microservice network of an application constantly up to date.

Under these circumstances, it is difficult or even impossible to find out quickly where the bottlenecks of an overall application actually lie. The same applies to possible security gaps in the microservices involved, especially when they are contributed by third parties.

Zero trust instead of perimeter security

So what can be done? The old paradigm of perimeter security is being replaced by “zero trust”. Under this model, data is considered a central asset: unauthorized access to, manipulation of, or relocation of data must be prevented in all cases. Therefore, every infrastructure component is considered untrustworthy until proven otherwise.

Three concepts are used: role-based identities for access; secure authentication mechanisms based on passwords, keys and other authentication features; and careful access management, in which fine-grained access authorizations are defined.

All data sources and computing services are considered resources. All communication channels (including mobile and home office connections) are secured. Access rights are granted only on a per-session basis, and the rules must adapt to the respective situation. Devices in use are kept constantly updated. Finally, the company continuously collects extensive information on the status of the infrastructure and the communication processes taking place within it.
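The following Python sketch illustrates, in strongly simplified form, how such a session-based, role- and context-aware access decision might be expressed in code; the roles, resources and context rules shown are illustrative assumptions, not a reference implementation.

```python
# Simplified zero-trust access check: every request is evaluated per session,
# based on a role-based identity and the current context. All names are illustrative.
from dataclasses import dataclass

# Fine-grained authorizations: which role may perform which action on which resource.
POLICY = {
    ("analyst", "read",  "customer-db"),
    ("admin",   "read",  "customer-db"),
    ("admin",   "write", "customer-db"),
}

@dataclass
class Session:
    user: str
    role: str
    mfa_passed: bool      # secure authentication (e.g. password plus second factor)
    device_patched: bool  # devices in use must be kept up to date
    network: str          # "corporate", "home-office", "unknown", ...

def authorize(session: Session, action: str, resource: str) -> bool:
    """Grant access only for this session, and only if identity, device and
    context satisfy the rules; nothing is trusted by default."""
    if not session.mfa_passed or not session.device_patched:
        return False
    if session.network == "unknown":
        return False
    return (session.role, action, resource) in POLICY

s = Session(user="jdoe", role="analyst", mfa_passed=True,
            device_patched=True, network="home-office")
print(authorize(s, "read", "customer-db"))   # True
print(authorize(s, "write", "customer-db"))  # False: not among the analyst's authorizations
```

The key point is that the decision is re-evaluated per session and per request rather than granted once at the network perimeter.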

The zero trust concept must be maintained permanently across the entire infrastructure by tools and processes, from the data center through internal or external hosting environments to the connecting LAN/WAN. Because zero trust is itself still evolving, constant attention to current technological trends is also required.

Specifically, networks are micro-segmented so that “lateral movement” within the infrastructure is prevented as far as possible. This reduces the respective attack surface and thus minimizes the impact of successful attacks.
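A small sketch of what a default-deny micro-segmentation rule set can look like conceptually; the segment names and ports are invented for illustration.

```python
# Sketch of a micro-segmentation rule set: traffic between segments is denied
# unless explicitly allowed, which limits lateral movement. Names are examples.
ALLOWED_FLOWS = {
    ("web-frontend", "app-tier", 443),
    ("app-tier",     "database", 5432),
    ("monitoring",   "app-tier", 9100),
}

def flow_permitted(src_segment: str, dst_segment: str, dst_port: int) -> bool:
    """Default deny: only explicitly whitelisted segment-to-segment flows pass."""
    return (src_segment, dst_segment, dst_port) in ALLOWED_FLOWS

print(flow_permitted("web-frontend", "app-tier", 443))   # True
print(flow_permitted("web-frontend", "database", 5432))  # False: lateral movement blocked
```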

Zero trust concepts are usually implemented step by step: first, data and other assets are discovered, cataloged and classified; then the architecture is micro-segmented; next, network and application traffic is analyzed; and finally, control systems and automated processes are put in place. Overall, this can take several years.

In the context of zero trust concepts and improved security architectures, it is particularly important to know the normal behavior of each application exactly, in order to derive a rule set and thresholds for anomalies from it. This is only possible with modern tools.
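As a simple illustration of the principle, the following sketch derives a baseline from historical traffic values and flags anything above a mean-plus-three-standard-deviations threshold; real products use far more sophisticated models, and the sample values are invented.

```python
# Sketch: derive a per-application baseline from historical traffic and flag
# values that exceed a simple threshold (mean + 3 standard deviations).
import statistics

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Return (mean, threshold) for a metric such as requests per minute."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    return mean, mean + 3 * stdev

def is_anomalous(value: float, threshold: float) -> bool:
    return value > threshold

history = [120, 118, 131, 125, 122, 129, 117, 124]  # normal requests per minute
mean, threshold = build_baseline(history)
print(is_anomalous(128, threshold))   # False: within normal behavior
print(is_anomalous(420, threshold))   # True: candidate anomaly, raise an alert
```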

According to analyses by SANS, current zero trust concepts have a surprising gap in this area: they only provide for a data level and a control level.

Within a micro-segmented infrastructure, the data level comprises all important data locations as well as the systems, applications and users that access them via secure gateways. Important components of the data level are identity and access management (IAM) systems, authentication mechanisms, firewalls and other access restrictions.

The control level is where business rules are created and maintained and the control systems are managed. It interacts with the protective gateways and supports quick reactions to potential attacks.

The three levels of a zero trust infrastructure: monitoring, data and control (Image: SANS)

Zero trust infrastructures need a monitoring level

But where do these two levels get the data on which they base their behavior and their rules? A third layer, closest to the infrastructure, is required: the monitoring level. Here, data is acquired, identified, classified and consolidated. This allows the entire infrastructure to be constantly monitored for malicious activity, threats to be identified and the consequences of attacks to be mitigated.

This requires elements such as packet brokers: systems for network packet analysis that enable intelligent analysis and filtering of application traffic. The packet brokers collect all network traffic, decrypt it if necessary, filter out data that is not useful for analysis and send the required data to the respective special tools (e.g. threat detection systems, SIEM).
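Conceptually, the filter-and-forward logic of a packet broker can be sketched as follows; the routing rules and tool names are illustrative assumptions and do not describe any vendor's actual implementation.

```python
# Conceptual sketch of a packet broker's filter-and-forward logic: collect flow
# records, drop traffic no tool needs, route the rest to the right special tool.
from typing import Iterable

ROUTING_RULES = [
    # (predicate, destination tool) - both sides are invented examples
    (lambda f: f["dst_port"] in (80, 443), "threat-detection"),
    (lambda f: f["protocol"] == "dns",     "siem"),
    (lambda f: f["bytes"] > 10_000_000,    "forensics"),
]

def broker(flows: Iterable[dict]) -> dict[str, list[dict]]:
    """Group flows by the tool that should analyze them;
    anything matching no rule is filtered out."""
    queues: dict[str, list[dict]] = {}
    for flow in flows:
        for matches, tool in ROUTING_RULES:
            if matches(flow):
                queues.setdefault(tool, []).append(flow)
                break  # first matching rule wins
    return queues

sample = [
    {"protocol": "tcp", "dst_port": 443,  "bytes": 4_200},
    {"protocol": "dns", "dst_port": 53,   "bytes": 310},
    {"protocol": "udp", "dst_port": 5353, "bytes": 120},  # filtered out: no tool needs it
]
print(broker(sample))
```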

This filtering process is extremely important, because most tools are overwhelmed when they have to examine the entire network traffic for certain characteristics. Often they cannot do so at all, because the traffic is encrypted with SSL or TLS. Currently, only a third of SSL/TLS traffic is decrypted for analysis purposes, or only part of the communication is analyzed due to asymmetric routing. Attackers are increasingly taking advantage of this.

Only a third of the SSL/TLS web traffic is currently decrypted for verification purposes (Image: Cyberedge)

Modern, powerful network packet brokers prevent the special tools from being flooded with irrelevant traffic. At the same time, they decrypt otherwise unreadable SSL/TLS connections and thus make them analyzable. Only then can special tools for, say, forensic, metadata or traffic analysis do their job at all in heavily used hybrid networks with a high remote share. Errors can thus be analyzed more quickly, and attacks that are difficult to detect, for example by insiders, can be identified and blocked sooner.

The success of a three-level architecture comprising monitoring, data and control levels should be measurable: in the number of identified applications and their communication flows, in the number of identified and labeled entities or sensitive data flows in the network, in fewer network access alarms and fewer compromised systems and applications, and in less time needed for the detection and analysis of, and response to, attacks.
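Two of these metrics, mean time to detect (MTTD) and mean time to respond (MTTR), can be tracked with very little effort, as the following sketch with invented incident timestamps shows.

```python
# Sketch: compute mean time to detect (MTTD) and mean time to respond (MTTR)
# as simple success metrics. The incident timestamps are invented for illustration.
from datetime import datetime
from statistics import mean

incidents = [
    # (attack start,              detected,                     contained)
    (datetime(2020, 5, 4, 9, 0),  datetime(2020, 5, 4, 9, 40),  datetime(2020, 5, 4, 11, 5)),
    (datetime(2020, 6, 1, 14, 0), datetime(2020, 6, 1, 14, 25), datetime(2020, 6, 1, 15, 10)),
]

mttd = mean((d - s).total_seconds() / 60 for s, d, _ in incidents)
mttr = mean((c - d).total_seconds() / 60 for _, d, c in incidents)
print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min")
```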

Application Metadata Intelligence examines various characteristics of the application data streams in order to determine exactly which data can be assigned to which application and to generate metadata about the streams (Image: Gigamon)

Application Intelligence optimizes the work of downstream tools

Intelligent analysis of the applications' traffic behavior (application intelligence) plays a particularly important role at the monitoring level of such architectures; it is what makes the filtering and categorization of the application data streams possible in the first place. It is even better if such mechanisms are embedded in an overall concept.

One example of this is Gigamon Application Intelligence, a core component of Gigamon’s Visibility and Analytics Platform. Gigamon Application Intelligence visualizes the app landscape, filters application data streams and supplies metadata for all applications. More than 3,000 applications are identified and categorized, and traffic is filtered accordingly. Decisions are not based on protocols alone; many other features are used to identify the application. In the area of social media (Facebook, for example), a distinction can be made as to which platform the user comes from and what the user is doing (sharing information, receiving information, playing, etc.).

This means that only relevant streams reach the downstream third-party tools, while irrelevant data flows past them. Gigamon Application Intelligence delivers up to 7,000 metadata attributes on network layers 4 to 7. Special tools, up to and including SIEM systems, can be connected via existing connectors.
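The sketch below shows how a downstream consumer might filter such metadata records before handing them to a SIEM; the JSON record layout is a hypothetical example and does not reflect Gigamon's actual export format, which real deployments would access via the vendor's documented connectors.

```python
# Sketch of a downstream consumer for application metadata records. The record
# layout is a hypothetical example, not a vendor's actual export format.
import json

def filter_for_siem(records: list[dict], watched_apps: set[str]) -> list[dict]:
    """Keep only metadata records for the applications the SIEM should see."""
    return [r for r in records if r.get("application") in watched_apps]

raw = '''[
  {"application": "facebook", "action": "share",  "src_ip": "10.0.1.12", "bytes": 5120},
  {"application": "dns",      "action": "query",  "src_ip": "10.0.1.12", "bytes": 80},
  {"application": "dropbox",  "action": "upload", "src_ip": "10.0.2.40", "bytes": 90000}
]'''

records = json.loads(raw)
for event in filter_for_siem(records, watched_apps={"facebook", "dropbox"}):
    print(json.dumps(event))  # in practice: forward to the SIEM via its connector
```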

The Visibility and Analytics Fabric concept scales with the network and with the analysis, monitoring and security solutions in use. It filters all traffic flowing over the network. New tools for specific analysis tasks can be connected to the Visibility and Analytics Fabric flexibly and easily. Thanks to the Visibility and Analytics Platform, they only have to deal with the data that is really necessary for their tasks. This increases the speed of analysis and the accuracy of the investigations.

The application possibilities of the Gigamon solutions around the Visibility and Analytics Platform and Application Intelligence are diverse: existing shadow IT can be discovered and neutralized. The same applies to malicious applications or end devices, for example at the IoT edge. Risky configurations can be detected and changed automatically. The performance of the security toolchain increases. Threats are recognized and neutralized more quickly because the special tools only analyze the data they were designed for.

Conclusion

In the “new tomorrow” of IT, hybrid cloud infrastructures, remote work, mobility and new digital applications composed of microservices dominate. This creates new challenges for monitoring and security: insecurity is increasing, budgets and staff remain scarce, and new threats have to be warded off constantly. At the same time, the new digital applications will form the basis of future business and must therefore work optimally.

To survive in this landscape, companies need to set up three-level zero trust infrastructures with a monitoring, a data and a control level, because only an efficient and differentiated monitoring level enables the other levels to fulfill their tasks optimally.

The monitoring level should have sufficient intelligence to identify diverse applications and their normal behavior, to detect abnormalities in the data streams, and to filter the traffic in a targeted manner so that the downstream special security tools can analyze it.

At the same time, such tools ensure that the entire infrastructure and the applications running on it perform optimally, by identifying, analyzing and eliminating errors and bottlenecks as quickly as possible. This represents an important competitive advantage for companies in the digital age.

Online seminar: Network security and network monitoring in the new normal

The Gigamon Visibility Platform is the catalyst for the fast, optimized provision of data traffic to security tools and to network and application performance monitoring. Find out in this webinar how Gigamon solutions can increase the efficiency of your security architecture and reduce costs.
