Automation, ID & Zero Trust: NIST Scientists Speak

WASHINGTON: Once dismissed by many as merely a cyber buzzword, zero-trust security models are all the rage today. Ever since the NSA urged the defense sector to adopt zero trust, the model has been top of mind for security pros.

Zero trust, as a concept, existed even before the term, which is a decade old now. But with remote workforces and evolving IT environments, threat actors, and cyberattacks, many believe zero trust’s time has finally come.

“Personally, incidents such as the OPM data breach discovered in March 2014 made it clear to me that it is inevitable to move to ZTA,” NIST computer scientist Oliver Borchert said in an interview. Zero trust, with its “assume breach” mentality, “does lead to a switch in thinking and design of security.”

Borchert and fellow NIST computer scientist Scott Rose are co-authors of NIST Special Publication 800-207, Zero Trust Architecture, arguably one of the best guides available to cybersecurity pros. The two computer scientists focus on zero trust at NIST and spoke with me for this article.

Rose and Borchert characterize zero trust as a “cybersecurity paradigm” and “end-to-end approach.” As the name implies, zero trust is based on “the premise that trust is never granted implicitly but must be continually evaluated.” This differs from other models — notably, perimeter-focused security.

“Perimeter-based security models focus their security on traffic entering the network,” Borchert explained. “Once access is granted, all is fair game, and lateral movement is easier to achieve than in a ZTA environment.”

Until recently, the network perimeter could have claimed that rumors of its death were greatly exaggerated. After all, before last March, the vast majority of the global workforce still reported every day to a centralized workplace.

While it’s true off-premises cloud hosting and mobile devices had been on the rise for years, many an organization’s IT assets were still located within the same enterprise network perimeter as the employees who were using them. (Note, for example, how many organizations are still using on-premises versions of Microsoft Exchange.)

And then the pandemic happened. Workers went largely remote overnight. To complicate matters, every home office router, personal laptop, remote access appliance, and unattended social media account — as STRATCOM learned — became an initial threat vector itself or one that could provide a potential stepping stone into target enterprise networks. Attackers noticed.

This presented an old cybersecurity challenge at a new scale: How do network defenders know that the person and device remotely accessing the enterprise network are actually who and what they claim to be? Target learned this the hard way in 2013, when attackers got in using an HVAC vendor’s stolen credentials.

This is why many say identity is critical to zero-trust models. “It is important to know how resources are accessed,” Borchert notes. “And here, identity is not only related to a user, but also a device such as a printer or an IoT device, to name a few. Ignoring identity and identity verification is similar to leaving the key to one’s house on the front porch for everyone to enter.”

To Borchert’s last point, recall that all second-stage SolarWinds hacks entailed compromising victims’ Microsoft Active Directory.

But identity — while “foundational” to zero trust, as Rose notes — is not the only component.

Indeed, Borchert says, “Zero trust is not one single product that one can purchase off the shelf. Zero trust is a combination of technologies combined with the approach that each access to resources might be hostile or unauthorized and needs to be verified before it can be granted.”

In other words, zero trust requires many components. Yet zero trust is also a “dynamic” security model, meaning these components continuously communicate and adapt to new data about the environment in real time. How? The answer is multifaceted but includes two critical elements: the first is the trust algorithm, and the second is automation.

“The trust algorithm is the ‘brain’ of the system,” Borchert explains. “The trust algorithm uses different inputs… in its evaluation. This evaluation is not a one-time event. It is performed continuously, and the outcome can change depending on policies, changes in the data provided, and context.”
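Borchert’s description can be illustrated with a minimal sketch: several inputs are weighed into a score, and the decision is recomputed for every access request rather than once at login. All of the input names, weights, and the threshold below are illustrative assumptions for this article, not anything specified by NIST SP 800-207.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # e.g., MFA passed
    device_compliant: bool     # device posture / patch-level check
    threat_score: float        # 0.0 (clean) .. 1.0 (known-bad), from threat feeds
    behavior_anomaly: float    # 0.0 (normal) .. 1.0 (highly anomalous)

def evaluate_trust(req: AccessRequest, grant_threshold: float = 0.7) -> bool:
    """Decide a single access request.

    Called per request: the same subject can be granted access one
    minute and denied the next if threat or behavioral data change.
    """
    if not req.user_authenticated:
        return False  # identity is foundational: no verified identity, no access
    score = 1.0
    if not req.device_compliant:
        score -= 0.3
    score -= 0.4 * req.threat_score
    score -= 0.3 * req.behavior_anomaly
    return score >= grant_threshold

# Same user and device, but fresh threat intelligence flips the decision:
clean = AccessRequest(True, True, threat_score=0.0, behavior_anomaly=0.1)
flagged = AccessRequest(True, True, threat_score=0.9, behavior_anomaly=0.1)
print(evaluate_trust(clean))    # True
print(evaluate_trust(flagged))  # False
```

Note how context, not just credentials, drives the outcome: the second request is denied even though the user authenticated and the device is compliant, which is exactly the “continually evaluated” behavior Borchert describes.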

Borchert continues, “Automation is key, especially data such as threat analysis and behavioral data, which change over time and can affect the outcome of the algorithm.”

One challenge to this dynamic, adaptive security model: It produces a lot of data. “The volume of data that can be used to change policies is too great for human administrators to parse and act on quickly,” Rose explains. “Automation is key to digesting the necessary data and making changes to respond quickly to changes in the environment.”

Identity, the trust algorithm and automation are far from the only components or challenges to implementing a zero-trust model, but each represents a critical element.
