Enabling secure and scalable artificial intelligence architectures for Defense Department and public sector missions depends on deploying the right compute technologies across the entire AI lifecycle, from data sourcing and model training to deployment and real-time inferencing – that is, drawing conclusions from data.
At the same time, the AI pipeline can be secured through hardware-based semiconductor features such as confidential computing, which provide a trusted foundation. This enables Zero Trust principles to be applied across both information technology (IT) and operational technology (OT) environments, with OT having different security needs and constraints compared to traditional enterprise IT systems. Recent DoD guidance on Zero Trust specifically addresses OT systems such as industrial control systems that have become attack vectors for adversaries.
Breaking Defense discussed the diverse roles that chips play across AI and Zero Trust implementations with Steve Orrin, Federal Security Research Director and a Senior Principal Engineer with Intel.
Breaking Defense: In this conversation we’re going to be talking about chip innovation for public sector mission impact. So what does that mean to you?
Orrin: The way to think about chip innovation for the public sector is understanding that the public sector writ large is almost a microcosm of the broader private sector industries. Across the federal government and public sector ecosystem, with some exceptions, you’ll find almost every kind of use case, with many of the same usages and requirements that you find across multiple industries in the private sector: logistics and supply chain management, facilities operations and manufacturing, healthcare, and finance.
When we talk about chip innovation specific to the public sector, it’s this notion of taking private sector technology solutions and capabilities and federalizing them for the specific needs of the US government. There’s a lot of interplay there and, similarly, when we develop technologies for the public sector and for federal missions, you often find opportunities for commercializing those technologies to address a broader industry requirement.
With that as the baseline, we look at agencies’ requirements and whether their IT systems and infrastructure can scale to support them in achieving their goals – enabling the end user to perform their job or mission. The DoD and certain industries will often have a higher security bar, and in the Defense Department there’s an edge component to the mission.
Being able to take enterprise-level capabilities and move them into edge and theater operations where you don’t necessarily have large-scale cloud infrastructure or other network access means you have to be more self-contained, more mobile. It’s about innovations that address specific mission needs.
One of the benefits of being Intel is that our chips are inside the cloud, the enterprise data center, the client systems, the edge processing nodes. We exist across that entire ecosystem, including network and wireless domains. We can bring the best of what’s being done at those various areas and apply them to specific mission requirements.
We also see a holistic view of cloud, on-prem, end-user, and edge requirements. We can look at the problem sets that they’re having from a more expansive approach as opposed to a stove pipe that’s looking just at desktop and laptop use cases or just at cloud applications.
This holistic view of the requirements enables us to help the government adopt the right technology for their mission. That comes to the heart of what we do. What the government needs is never one-size-fits-all when it comes to solving public-sector requirements.
It’s helping them achieve the right size, weight, and power profile, the right security requirements, and the right mission enabling and environmental requirements to meet their mission needs where they are, whether that be cloud utilization or at the pointy edge of the spear.
What’s required to enable secure, scalable AI architectures that advance technology solutions for national security?
From an Intel perspective, scalable AI means being able to go both horizontally and vertically to have the right kind of computing architecture for every stage of the lifecycle, from the training to the tuning, deployment, and inferencing. There are going to be different requirements, both in size, weight, and power (SWaP) and in the horsepower of the actual AI workload being performed.
Oftentimes you’ll find that the real challenge isn’t the AI training, which everyone focuses on because it feels like the big problem; training is just the tip of the iceberg. When you look at the challenge, sometimes it’s around data latency or input ingestion speeds. How do I get all of this data into the systems?
Maybe it’s doing federated learning because there’s too much data to put it all in one place and it’s all coming from different sensors. There are actually benefits to pushing that compute closer to where the data is being generated and doing federated learning out at the edge.
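The federated approach described above can be sketched in a few lines. This is a minimal illustration of federated averaging, assuming each edge node trains locally and ships back only model weights and a sample count – never the raw sensor data. The node names and weight values are purely illustrative.

```python
def federated_average(node_updates):
    """Average model weights from edge nodes, weighted by each node's
    local sample count, to produce a single global model."""
    total_samples = sum(n_samples for _, n_samples in node_updates)
    n_params = len(node_updates[0][0])
    global_weights = [0.0] * n_params
    for weights, n_samples in node_updates:
        # Nodes with more local data contribute proportionally more.
        for i, w in enumerate(weights):
            global_weights[i] += w * (n_samples / total_samples)
    return global_weights

# Each tuple: (locally trained weights, number of local training samples).
updates = [
    ([0.2, 0.4], 100),   # e.g. a radar-station node
    ([0.6, 0.8], 300),   # e.g. a vehicle-mounted node
]
print(federated_average(updates))  # approximately [0.5, 0.7]
```

Only the small weight vectors cross the network here, which is the point: the compute moves to the data rather than the data moving to a central cloud.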
At the heart of why Intel is a key player in this is understanding that it’s not a one-size-fits-all approach from a compute perspective, but providing the right compute to the needs of the various places along the horizontal scale.
At the same time there’s the vertical scale, which is the need to do massive large language model training, or inference across thousands of sensors, and fusion of data across multimodal sensors that are capturing different kinds of data such as video, audio, and RF spectrum in order to get an accurate picture of what’s being seen across logistics and supply chains, for example.
I need to pull in location data of where supplies are across vendor supply chains. I need to be able to pull in information from my project management demand signal to understand what’s needed where, and from mission platforms like planes, vehicles, weapons systems, radar stations, and sensor technologies to know where I’m deploying people. Those are different kinds of data sets and structures that have to be fused together in order to enable supply chain and logistics management at scale.
Being able to scale up computing power to meet the needs of those various parts is about how we’re providing the right architecture for those different parts of the ecosystem and AI pipeline.
Intel is helping defense and intelligence agencies adopt AI in ways that are secure, scalable, and aligned with Zero Trust principles, especially in operational technology environments as opposed to IT environments. Explain.
Operational technology has been around for a long time and is distinct from what is known as information technology or enterprise systems, where you have enterprise email and your classic collaboration and document management.
OT is everything that is not that – from fire suppression and alerting systems and HVAC to the robots, machines, and error detection technologies that do quality control. Those are the operational technologies that perform the various task-specific functions supporting the operations and mission of an organization; they are not your classic IT operations.
One of the interesting transitions over the last many years is that the actual kinds of technology in those OT environments now look and feel a lot like IT. It’s a set of servers or client systems that are performing a fixed function, but the vendors are still your classic laptop and PC OEMs.
That mixing of IT-style equipment in OT environments has created a tension point over the years when it comes to things like management and how you secure OT systems versus IT, because OT systems are more mission critical. They’re more fixed-function, and they often don’t have the space or the luxury of heaps of security tools monitoring them, because they have real-time reliability requirements like guaranteed uptime.
The DoD is coming out with new Zero Trust guidance specifically for OT, because IT Zero Trust principles don’t easily translate to OT environments. There are different constraints and limitations in OT, as well as some higher-level requirements, so there needs to be an understanding that the two are different when it comes to applying Zero Trust.
What do you suggest?
One of the first steps that I’ve talked about is getting the right people in the room for those initial phases of policy definition and architectural planning. Oftentimes you’ll find, and we’ve seen this a lot in the private sector, that when they start looking at OT, the IT people come up with security policy and force it on the OT systems. More often than not that fails miserably because OT just isn’t like IT. You don’t have the same flexibility and you have more stringent requirements for the actual operations side of OT.
That calls for crafting subset policies for that system and then containerizing that from a segmentation or a policy perspective and monitoring against that. The nice thing about OT is you don’t have to worry about every possible scenario. If you take the example of a laptop, users can do almost anything on their laptop. They can browse the Internet, send email, work with documents, collaborate on Teams calls. That means there’s a lot of security I have to worry about across the myriad usages enabled by that PC.
In an OT environment, you have a much smaller set of things the system is supposed to be doing, which means you can lock down that system, and access to it, to just key functions. That gives you a much tighter policy in OT than you could ever apply on the IT side. You can craft very specific policies, monitoring, and access controls for that particular OT or mission platform. That is a powerful way of applying it.
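The fixed-function lockdown described above can be illustrated as a default-deny allowlist. This is a hypothetical sketch, not any real product’s policy engine; the process names, ports, and addresses are invented for illustration.

```python
# Hypothetical fixed-function policy for a single OT node. Because the node
# only ever does a few things, everything else can be denied by default --
# a posture a general-purpose IT laptop could never sustain.
OT_POLICY = {
    "allowed_processes": {"plc_runtime", "sensor_agent"},
    "allowed_ports": {502, 4840},           # e.g. Modbus TCP, OPC UA
    "allowed_destinations": {"10.0.5.10"},  # the historian server only
}

def is_permitted(process, port, destination, policy=OT_POLICY):
    """Default-deny: permit only activity matching every policy dimension."""
    return (process in policy["allowed_processes"]
            and port in policy["allowed_ports"]
            and destination in policy["allowed_destinations"])

print(is_permitted("plc_runtime", 502, "10.0.5.10"))   # permitted
print(is_permitted("web_browser", 443, "8.8.8.8"))     # denied
```

The design choice worth noting is the direction of the policy: IT security typically enumerates what is forbidden, while a fixed-function OT node lets you enumerate the small set of what is allowed and deny the rest.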
If you look at some of the guidance that’s coming out, the Navy has just recently published some specific OT guidance, NIST is coming out with OT guidance. It’s about tying the policies to the environment and being able to craft a subset of security controls specific to the domain, and then leveraging the right technologies that you need in order to achieve that goal.
Final thoughts?
Intel has technology and architectures that provide the right compute at the right place where and when the customer needs it. We understand the vertical and horizontal scale requirements, and provide the security, reliability, and performance for those environments that you need across your mission areas.
Second, when applying Zero Trust, it’s not one size fits all. You need to craft your Zero Trust policies, controls, and technologies to meet the requirements of your mission and of your enterprise IT and OT technologies.
Then, much of the technology and the security capabilities you need are already built into the system. You just need to take advantage of them, whether that be network segmentation, secure boot, or confidential computing. The hardware and software that has often already been deployed gives you a lot of those capabilities. You just need to leverage them.
To learn more about Intel and AI visit www.intel.com/usai.
Author: Breaking Defense