
Implementing Zero Trust Architecture With Armedia

by | Jun 9, 2020 | CCMC, FedRAMP | 0 comments

Overview

Traditional network security protocols involve strategies to keep malicious actors out of the network while allowing almost unrestricted access to users and devices inside. These traditional architectures leverage legacy technologies such as firewalls, virtual private networks (VPNs), and network access control (NAC) to build multiple security layers at the perimeter. In essence, they trust users inside the infrastructure and verify only those outside. If attackers manage to breach the perimeter and enter the internal network, they gain access to the entire infrastructure.


Let’s take a look at some attack progressions to see why this approach is not enough:

  1. Phishing email attacks on employees
  2. Compromised privileged machines
  3. Keyloggers installed on corporate machines
  4. Compromised developer passwords
  5. Production environment compromised via a privileged machine
  6. Compromised database credentials
  7. Exfiltration via a compromised cloud hosting service, and so on

These are the threats that come from within the network. How do you plan to prevent such scenarios?

The answer is Zero Trust Architecture.

What is a Zero Trust Architecture?


“Zero trust” is a term that was coined by John Kindervag in 2010. He proposed that companies should move away from perimeter-centric network security approaches to a model that involves continuous trust verification mechanisms across every user, device, layer, and application.

(source: https://www.csoonline.com/article/3247848/what-is-zero-trust-a-model-for-more-effective-security.html)

ZTA takes a “never trust, always verify” approach, implementing strict identity verification for users and devices whether they access resources from inside or outside the network perimeter.

Once a user or device is inside the network, Zero Trust Architecture implements protocols for limited access to prevent malicious activities if the entity happens to be an attacker. Thus, if a security breach happens, it cannot propagate to the whole infrastructure, as it would in traditional network security architectures.

ZTA assumes the network is in a compromised state, so every user and device must go through strict identity verification to prove they are not malicious actors. This model treats all actors as external and continuously challenges them to verify trust. Once verified, only the required access is granted.

How to implement a Zero Trust Architecture


Zero Trust Architecture is not a technology, nor is it tied to any specific technology. It is a holistic strategy and approach to implementing network security, based on several fundamental assertions (source: NIST SP 800-207 draft):

  1. All computing services and data sources are considered as resources.
  2. No implied trust based on network locality.
  3. The network is always considered to be compromised.
  4. The network is under constant internal and external threats.
  5. Authenticate and authorize every user, device, and network flow.
  6. Strict trust verification before accessing each individual resource.
  7. Connection-based restricted access to each individual resource.
  8. Resource access is determined by Identity, behavioral attributes, and dynamic policies.
  9. The organization makes sure that all systems are well-secured.
  10. Monitor systems to make sure that they are well-secured.

What do all these principles mean? Well, this means that there should be no trust between any individual resource and the entity trying to access it. Hence the name Zero Trust Architecture. Implementing it requires leveraging multiple technologies to challenge and prove User trust, Device trust, Session trust, Application trust, and Data trust.
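Challenging each of these trust dimensions before granting access can be sketched as follows. This is a minimal, hypothetical illustration (real policy engines weigh far richer signals than these five booleans):

```python
from dataclasses import dataclass

# Hypothetical sketch: an access request is evaluated against several
# independent trust signals; failing any single one denies access.
@dataclass
class AccessRequest:
    user_authenticated: bool  # user trust (e.g., MFA passed)
    device_compliant: bool    # device trust (patched, encrypted)
    session_valid: bool       # session trust (unexpired, bound to the user)
    app_authorized: bool      # application trust (on the access-list)
    data_clearance: bool      # data trust (classification vs. clearance)

def evaluate_trust(req: AccessRequest) -> bool:
    """Zero trust: every signal must pass; absence of a denial is not approval."""
    return all([
        req.user_authenticated,
        req.device_compliant,
        req.session_valid,
        req.app_authorized,
        req.data_clearance,
    ])

# One failed check (here, a non-compliant device) is enough to deny access.
ok = evaluate_trust(AccessRequest(True, True, True, True, True))
blocked = evaluate_trust(AccessRequest(True, False, True, True, True))
```

The key design point is default deny: the function proves trust positively on every dimension rather than looking for reasons to refuse.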

  1. Least-privileged access – The zero trust model requires defining privilege limits for users, devices, network flows, and applications. Each user and device should have the minimum privileges and access rights required to perform their jobs, on a need-to-know basis.

A comprehensive audit should be done to get a clear picture of the privileges every entity in the network needs. This is a key security control in Zero Trust Architecture, so the access-list must always be kept up-to-date.

  2. Network security policies – All standard network security policies must be in place in addition to zero trust policies. They should also be tested regularly for effectiveness and vulnerabilities.
  3. Log and inspect traffic – All activities and traffic must be logged, monitored, and inspected continuously. Automation should be adopted to perform these operations faster and more efficiently.
  4. Risk management and threat detection – A security analytics system must be in place to monitor suspicious activities based on monitoring, policies, behavior, and risk-adaptive controls. Proactive threat detection and resolution should be the norm.
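The least-privileged access principle above can be sketched as a default-deny access-list. The roles, resources, and rights below are hypothetical examples, not an actual policy set:

```python
# Hypothetical need-to-know access-list: each role is granted only the
# minimum rights required to perform its job.
ACCESS_LIST = {
    "developer": {"source-repo": {"read", "write"}, "test-db": {"read"}},
    "analyst":   {"reports": {"read"}},
}

def is_permitted(role: str, resource: str, action: str) -> bool:
    # Default deny: anything not explicitly granted is refused.
    return action in ACCESS_LIST.get(role, {}).get(resource, set())
```

Because the lookup falls through to an empty set, an unknown role, resource, or action is denied without any special-case code, which is exactly the behavior the audit-maintained access-list is meant to guarantee.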

A Zero Trust Architecture implementation for network security must address the following:

  1. Micro-segmentation – Divide the network/data center into smaller individual parts that can be secured with different access credentials and policies. This should be done in addition to traditional security protocols such as VPNs, NAC, firewalls, etc. It increases security many times over by preventing bad actors from going on a malicious spree throughout the network even if they compromise one part.
  2. Verify users and devices – Users and devices must both undergo a strict authentication and authorization process based on the Eliminated Trust Validation approach. No user or device is trusted or granted access to any resource unless it is on the access-list. Both must comply with security protocols, and devices should have up-to-date software, malware and virus protection, current patches, encryption, etc.
  3. Multi-factor authentication – MFA is definitely more secure than just a password; biometrics and one-time passwords (OTPs), for example, are MFA methods. The number of devices supporting MFA grows daily, and almost all smartphones support it.

Multi-factor authentication is an effective way of performing Eliminated trust validation by adding an extra layer of security.
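The possession factor in MFA can be illustrated with a standard TOTP check (RFC 6238, the algorithm behind common authenticator apps). This is a minimal sketch of the verification step, not a description of any particular vendor's MFA stack:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, at, digits=6, step=30):
    """RFC 6238 time-based one-time password: the code an authenticator app shows."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(at) // step)          # 30-second time window
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

def verify_mfa(password_ok, submitted_code, secret_b32, at):
    # Both factors must pass: knowledge (password) and possession (OTP device).
    return password_ok and hmac.compare_digest(submitted_code, totp(secret_b32, at))
```

Note the constant-time comparison (`hmac.compare_digest`) when checking the submitted code, and that a correct OTP alone never grants access: the knowledge factor must pass too.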

The Armedia way

We at Armedia specialize in implementing Zero Trust Architecture for network security, and we have been constantly evolving our zero trust security model with the best technologies and policies. In addition to the core components in an enterprise implementing a ZTA, several data sources provide input and policy rules used by the policy engine when making access decisions. These include local data sources as well as external (i.e., non-enterprise-controlled or -created) data sources.

 

Armedia Zero Trust Architecture

These include:

  1. Continuous diagnostics and mitigation (CDM) system: This gathers information about the enterprise asset’s current state and applies updates to configuration and software components. An enterprise CDM system provides the policy engine with the information about the asset making an access request, such as whether it is running the appropriate patched operating system (OS) and applications or whether the asset has any known vulnerabilities.
  • Armedia does not have a centralized policy engine and policy administrator for all resources; this is in part due to the significant effort to centralize all policy making and execution
  • We use Zabbix monitoring and alerting, along with Grafana dashboards for visuals
  • We use OSSEC/Wazuh on Linux for intrusion detection; we abandoned the use of AIDE on Linux due to resource usage
  • We use Windows Endpoint Security on Windows Servers
  • We use ManageEngine Desktop Central to manage and patch Windows Servers
  • We use Red Hat Satellite to manage and patch Linux Servers
  • Systems and servers are patched and restarted on a scheduled basis
  2. Industry compliance system: This ensures that the enterprise remains compliant with any regulatory regime that it may fall under (e.g., FISMA, healthcare or financial industry information security requirements). This includes all the policy rules that an enterprise develops to ensure compliance.
  • Armedia uses Tenable Nessus with scanning profiles based on DISA SRGs and STIGs
  • Armedia uses Active Directory Group Policy Objects (GPOs) to ensure and reassert compliance within Windows Servers by managing key configuration files, security settings, and application settings based on data stored in Configuration Management
  • Armedia uses Puppet within Red Hat Satellite to ensure and reassert compliance within Linux Servers by managing key configuration files, security settings, and application settings based on data stored in Configuration Management
  3. Threat intelligence feed(s): This provides information from internal or external sources that help the policy engine make access decisions. These could be multiple services that take data from internal and/or multiple external sources and provide information about newly discovered attacks or vulnerabilities. This also includes blacklists, newly identified malware, and reported attacks to other assets that the policy engine will want to deny access to from enterprise assets.
  • Armedia uses GeoIP data on its perimeter firewalls, VPN, web application, and secure file transfer resources to blacklist regions, countries, and sites that are known threats, along with sites that perform scans or launch attacks
  4. Data access policies: These are the attributes, rules, and policies about access to enterprise resources. This set of rules could be encoded in or dynamically generated by the policy engine. These policies are the starting point for authorizing access to a resource as they provide the basic access privileges for accounts and applications in the enterprise. These policies should be based on the defined mission roles and needs of the organization.
  • Armedia does not have a centralized policy engine and policy administrator for all resources; this is in part due to the significant effort to centralize all policy making and execution
  • Data access policies are defined within the perimeter firewall for permitted inbound and outbound access; policies specifying access are assigned based on Directory group memberships
  • Data access policies are set based on Directory group assignments that align to client, customer, corporate resources for development, test, preproduction, and production environments
  • Data access policies are set within Infrastructure at the network and storage layers
  5. Enterprise public key infrastructure (PKI): This system is responsible for generating and logging certificates issued by the enterprise to resources, subjects, and applications. This also includes the global certificate authority ecosystem and the Federal PKI, which may or may not be integrated with the enterprise PKI. This could also be a PKI that is not built upon X.509 certificates.
  • Directory, Server, Infrastructure, and Network resources trust well-established Third Party Root Certification Authorities

– Federal PKI Root Certication Authorities resources not included can be added upon request and subsequent approval

  • Use Active Directory Certificate Services to manage and issue server, application, and user certificates for all resources in the environment

– Leverage integration with Active Directory for enrollment of Windows Servers, users, and applications

– Leverage automation within Linux Servers for server and application certificate management

  6. ID management system: This is responsible for creating, storing, and managing enterprise user accounts and identity records (e.g., lightweight directory access protocol (LDAP) server). This system contains the necessary user information (e.g., name, email address, certificates) and other enterprise characteristics such as role, access attributes, and assigned assets. This system often utilizes other systems (such as a PKI) for artifacts associated with user accounts. This system may be part of a larger federated community and may include nonenterprise employees or links to nonenterprise assets for collaboration.
  • Use Active Directory for user authentication along with group, container, organizational unit (OU), and organization management

– Desktops, servers (Windows and Linux) are managed using AD – only exceptions are servers placed in a designated DMZ

  • Use Active Directory for user authorization based on group assignments; groups are defined for application-specific and system access based on the principle of least privilege
  • Use Active Directory Federation Services for federated authentication

– Permit access only for named and authorized users


  7. Network and system activity logs: This is the enterprise system that aggregates asset logs, network traffic, resource access actions, and other events that provide real-time (or near-real-time) feedback on the security posture of enterprise information systems.
  • Logs are collected into an Elastic Stack deployment, have correlation applied, and are then surfaced through Kibana dashboards
  8. Security information and event management (SIEM) system: This collects security-centric information for later analysis. This data is then used to refine policies and warn of possible attacks against enterprise assets.
  • Armedia leverages Splunk and Elastic Stack for deriving and surfacing SIEM information
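The GeoIP-based blacklisting described under the threat intelligence feeds above can be sketched with CIDR blocklists. The networks below are reserved documentation ranges standing in for real threat-feed entries:

```python
import ipaddress

# Hypothetical sketch: deny connections whose source address falls inside
# a CIDR block associated with a blacklisted region or a known scanner.
# These are RFC 5737 documentation ranges, not real threat-feed data.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in for a blocked region
    ipaddress.ip_network("198.51.100.0/24"),  # stand-in for a known scanning network
]

def is_blocked(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)
```

In practice a perimeter firewall evaluates such lists in hardware or kernel space, and the feed is refreshed continuously, but the membership test is the same containment check shown here.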

Establishing User Trust

When someone makes a request to access the network with any protocol (for example, VPN software, HTTPS, or TLS), we take the following actions based on the use-case:

  1. Requests from unauthorized sources (a user or device), such as an incorrect VPN, are rejected based on the policies.
  2. Once the request source is authenticated, a session is established, and requests with the correct session ID are granted role-based access.
  3. We use multiple protocols and technologies, such as secure LDAP, Kerberos, SAML, TLS over JDBC, ActiveMQ JMS, SSH, and web-based applications.
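The reject-then-establish-session flow above can be sketched as follows; the users, roles, and resources are hypothetical examples:

```python
import secrets

SESSIONS = {}            # session_id -> role granted to that session
AUTHORIZED = {"alice"}   # hypothetical set of users permitted to connect
ROLES = {"alice": "developer"}
ROLE_RESOURCES = {"developer": {"source-repo", "test-db"}}

def authenticate(user):
    """Reject unknown sources outright; otherwise establish a session."""
    if user not in AUTHORIZED:
        return None                      # rejected before any session exists
    sid = secrets.token_hex(16)          # unguessable session ID
    SESSIONS[sid] = ROLES[user]
    return sid

def access(session_id, resource):
    """Requests with a valid session ID receive role-based access only."""
    role = SESSIONS.get(session_id)
    return role is not None and resource in ROLE_RESOURCES.get(role, set())

sid = authenticate("alice")
```

Even with a valid session, the second check still limits the caller to the resources of their role, so a stolen or guessed session ID never widens access beyond that role.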

Confirming State of the Device

  1. VPN access by a client to a server – Necessary information is collected from the client software to determine whether the client device meets a security baseline before granting access. If the baseline is not met, the VPN server rejects the user without even presenting an authentication challenge.
  2. Browser with an SSO-aware application – Necessary information is collected from the client’s browser for a security baseline check and to filter out curl/wget requests. LDAP, SAML, Kerberos, etc. are used to grant application access based on the use-case.
  3. Developers with SSH connections – We leverage Kerberos for easy access via the Generic Security Service Application Program Interface (GSSAPI). Multi-factor authentication, encryption, and access-list protocols are also used to provide access depending on the use-case.
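The baseline check in the VPN scenario can be sketched as a posture comparison; the baseline attributes below are hypothetical, not a real VPN policy:

```python
# Hypothetical device posture baseline: if the reported device state misses
# any required attribute, access is denied before an auth challenge is shown.
BASELINE = {
    "os_patched": True,      # OS at the required patch level
    "disk_encrypted": True,  # full-disk encryption enabled
    "av_running": True,      # endpoint protection active
}

def meets_baseline(device_report: dict) -> bool:
    # A missing attribute counts as a failure, never as a pass.
    return all(device_report.get(key) == value for key, value in BASELINE.items())
```

Rejecting before the authentication challenge, as the VPN scenario describes, means a non-compliant device never even learns whether its credentials would have worked.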

Advantages and benefits of Zero Trust Architecture

Our way of implementing the Zero Trust Architecture for network security, business processes, services, and systems delivers the capabilities needed to enable the following advantages and benefits (source: NIST SP 800-207 draft, pp. 19–20):

  1. Prevention of data and security breaches
  2. Minimized lateral movement through micro-segmentation
  3. Security measures and protection that can easily expand across multiple layers regardless of the underlying infrastructure
  4. Insights and analytics for users, workloads, devices, and components across the network and environments to enforce the required policies
  5. Continuous logging, monitoring, and reporting to detect threats and respond in a timely manner
  6. Minimized exposure and increased security compliance
  7. Minimized security gaps by covering a wide variety of attack points
  8. Increased business agility to securely adopt cloud solutions
  9. Less management effort and a smaller skill set required than traditional security architectures
  10. Savings in time, money, and effort

On a final note, in today’s digital landscape, organizations need to evolve their security protocols to fight off any malicious actor, whether inside or outside the network. Shifting to Zero Trust Architecture will enable you to protect your valuable network and data assets. The zero trust model enhances the security of an organization and provides it with substantial business advantages and benefits.
