Intel’s confidential AI simplifies data security


Intel is pioneering the integration of AI with confidential computing technologies to help businesses address the complex data protection challenges associated with AI adoption. Anand Pashupathy, VP & General Manager of the Security Software & Services Division at Intel, discusses how confidential AI can simplify and secure AI at scale. Confidential computing secures data while it is in use within the processor and memory.

It allows encrypted data to be processed in memory without exposing it to other parts of the system through the use of a trusted execution environment (TEE). The TEE provides attestation, a process that cryptographically verifies its authenticity and correct configuration. When used alongside storage and network encryption, confidential computing protects data across all states: at rest, in transit, and in use.
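The attestation flow described above can be sketched in miniature. This is a simplified, self-contained model, not Intel's actual attestation API: the HMAC key stands in for a hardware-rooted signing key, and the "measurement" is simply a hash of the enclave's code. Real attestation involves vendor certificate chains and quote-verification services.

```python
import hashlib
import hmac
import json

# Stand-ins for illustration only: in real hardware, the signing key
# chains back to the CPU vendor and cannot be read by software.
SIGNING_KEY = b"hardware-rooted-key"
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-code-v1").hexdigest()

def produce_quote(enclave_code: bytes) -> dict:
    """Enclave side: measure the loaded code and sign the report."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    report = json.dumps({"measurement": measurement}).encode()
    signature = hmac.new(SIGNING_KEY, report, hashlib.sha256).hexdigest()
    return {"report": report.decode(), "signature": signature}

def verify_quote(quote: dict) -> bool:
    """Relying party: check the signature, then compare the measurement."""
    report = quote["report"].encode()
    expected_sig = hmac.new(SIGNING_KEY, report, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, quote["signature"]):
        return False  # report was altered in transit
    measurement = json.loads(quote["report"])["measurement"]
    return measurement == EXPECTED_MEASUREMENT  # is this the code we trust?

good = verify_quote(produce_quote(b"trusted-enclave-code-v1"))
bad = verify_quote(produce_quote(b"patched-malicious-code"))
```

The key point the sketch captures: a tampered enclave still produces a validly signed quote, but its measurement no longer matches the expected value, so the relying party refuses to trust it.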

Intel offers two confidential computing technologies suited to different security needs. Intel Trust Domain Extensions (Intel TDX) creates a TEE consisting of an entire virtual machine, often requiring few to no code changes. Intel Software Guard Extensions (Intel SGX) reduces the security perimeter to a single application or function, which may require more software steps but minimizes the software exposed to confidential data.

Confidential AI applies these computing principles to AI use cases, protecting AI models and associated data. It shields data used to train large language models (LLMs), the output generated, and the proprietary models themselves while in use.
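One way to picture how a proprietary model stays shielded is a key-release gate: the model owner ships encrypted weights, and the decryption key is handed over only after the TEE attests successfully. The sketch below is a toy illustration with an invented cipher (a SHA-256 keystream XOR); a real deployment would use authenticated encryption such as AES-GCM and a proper attestation verdict, not a boolean flag.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher for illustration only; NOT real encryption.
    XORs data against a SHA-256-derived keystream (same call decrypts)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        stream.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def release_model(sealed_weights: bytes, key: bytes, attestation_ok: bool) -> bytes:
    """Decrypt model weights only for a TEE that passed attestation."""
    if not attestation_ok:
        raise PermissionError("TEE attestation failed; key withheld")
    return keystream_xor(key, sealed_weights)

key = b"model-owner-secret"
weights = b"proprietary model weights"
sealed = keystream_xor(key, weights)          # weights at rest stay encrypted
plain = release_model(sealed, key, attestation_ok=True)
```

The design choice this mirrors: the plaintext model exists only inside a verified TEE, so neither the cloud operator nor other tenants ever see the weights in the clear.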

Confidential AI enhances data security

Through rigorous isolation, encryption, and attestation, confidential AI prevents malicious actors from accessing and exposing data within the execution chain. Many organizations prioritize AI integration but remain concerned about securing AI models and algorithms against malicious attacks. Confidential AI helps companies enhance the security and privacy of their AI deployments.

It is especially beneficial for protecting sensitive or regulated data, aiding compliance with regulations such as HIPAA and GDPR, and helping prevent theft of or tampering with proprietary AI models. Attestation assures users that they are interacting with authentic models, not modified versions. Confidential AI's benefits extend across the AI pipeline, from data preparation and consolidation to training, inference, and results delivery, bolstering protections at each vulnerable stage.

This is particularly advantageous in industries that handle sensitive information, such as healthcare, government, finance, and retail. Governments worldwide are implementing regulations to ensure AI deployments are secure and trustworthy. The European Union’s AI Act is one such example.

Intel collaborates with industry leaders to make AI usage more secure, addressing critical privacy and regulatory concerns at scale. By combining the strengths of AI and confidential computing, Intel aims to create a more secure environment for AI development and deployment, fostering trust and reliability in AI technologies across various sectors.